Feb 26 17:14:42 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 26 17:14:43 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 26 17:14:43 crc restorecon[4683]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc 
restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc 
restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 
17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:43 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:43 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 
17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 
crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 26 17:14:44 crc restorecon[4683]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc 
restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 26 17:14:44 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 26 17:14:46 crc kubenswrapper[4805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 17:14:46 crc kubenswrapper[4805]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 26 17:14:46 crc kubenswrapper[4805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 17:14:46 crc kubenswrapper[4805]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 26 17:14:46 crc kubenswrapper[4805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 26 17:14:46 crc kubenswrapper[4805]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.699064 4805 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705462 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705493 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705504 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705513 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705522 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705531 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705539 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705548 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705557 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705566 4805 
feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705574 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705582 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705591 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705600 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705608 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705616 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705624 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705634 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705642 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705652 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705660 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705668 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705676 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705684 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705699 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705708 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705717 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705725 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705733 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705742 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705750 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705758 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705767 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705778 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705788 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705797 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705809 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705821 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705830 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705839 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705848 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705856 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705864 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705872 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705881 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705889 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705902 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705913 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705922 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705932 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705941 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705950 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705959 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705971 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705979 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705988 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.705998 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706006 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706054 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706064 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706082 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706091 4805 feature_gate.go:330] 
unrecognized feature gate: MachineConfigNodes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706099 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706107 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706115 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706123 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706131 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706140 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706148 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706156 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.706167 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708063 4805 flags.go:64] FLAG: --address="0.0.0.0" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708111 4805 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708142 4805 flags.go:64] FLAG: --anonymous-auth="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708173 4805 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708195 4805 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708210 4805 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708227 4805 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708243 4805 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708255 4805 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708266 4805 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708278 4805 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708289 4805 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708299 4805 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708309 4805 flags.go:64] FLAG: --cgroup-root="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708319 4805 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708329 4805 flags.go:64] FLAG: --client-ca-file="" Feb 26 17:14:46 crc kubenswrapper[4805]: 
I0226 17:14:46.708339 4805 flags.go:64] FLAG: --cloud-config="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708348 4805 flags.go:64] FLAG: --cloud-provider="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708358 4805 flags.go:64] FLAG: --cluster-dns="[]" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708371 4805 flags.go:64] FLAG: --cluster-domain="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708380 4805 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708390 4805 flags.go:64] FLAG: --config-dir="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708400 4805 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708412 4805 flags.go:64] FLAG: --container-log-max-files="5" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708428 4805 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708456 4805 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708471 4805 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708485 4805 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708496 4805 flags.go:64] FLAG: --contention-profiling="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708507 4805 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708517 4805 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708527 4805 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708537 4805 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 26 17:14:46 crc 
kubenswrapper[4805]: I0226 17:14:46.708551 4805 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708562 4805 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708572 4805 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708585 4805 flags.go:64] FLAG: --enable-load-reader="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708598 4805 flags.go:64] FLAG: --enable-server="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.708609 4805 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709109 4805 flags.go:64] FLAG: --event-burst="100" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709123 4805 flags.go:64] FLAG: --event-qps="50" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709133 4805 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709144 4805 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709154 4805 flags.go:64] FLAG: --eviction-hard="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709167 4805 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709177 4805 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709187 4805 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709197 4805 flags.go:64] FLAG: --eviction-soft="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709206 4805 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709216 4805 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 26 
17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709226 4805 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709235 4805 flags.go:64] FLAG: --experimental-mounter-path="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709245 4805 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709255 4805 flags.go:64] FLAG: --fail-swap-on="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709265 4805 flags.go:64] FLAG: --feature-gates="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709277 4805 flags.go:64] FLAG: --file-check-frequency="20s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709287 4805 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709297 4805 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709308 4805 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709318 4805 flags.go:64] FLAG: --healthz-port="10248" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709328 4805 flags.go:64] FLAG: --help="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709338 4805 flags.go:64] FLAG: --hostname-override="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709348 4805 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709358 4805 flags.go:64] FLAG: --http-check-frequency="20s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709368 4805 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709377 4805 flags.go:64] FLAG: --image-credential-provider-config="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709388 4805 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 26 
17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709397 4805 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709406 4805 flags.go:64] FLAG: --image-service-endpoint="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709426 4805 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709438 4805 flags.go:64] FLAG: --kube-api-burst="100" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709452 4805 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709469 4805 flags.go:64] FLAG: --kube-api-qps="50" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709482 4805 flags.go:64] FLAG: --kube-reserved="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709492 4805 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709502 4805 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709512 4805 flags.go:64] FLAG: --kubelet-cgroups="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709522 4805 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709531 4805 flags.go:64] FLAG: --lock-file="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709541 4805 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709551 4805 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709561 4805 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709577 4805 flags.go:64] FLAG: --log-json-split-stream="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709587 4805 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 26 
17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709598 4805 flags.go:64] FLAG: --log-text-split-stream="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709607 4805 flags.go:64] FLAG: --logging-format="text" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709617 4805 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709630 4805 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709640 4805 flags.go:64] FLAG: --manifest-url="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709651 4805 flags.go:64] FLAG: --manifest-url-header="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709665 4805 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709675 4805 flags.go:64] FLAG: --max-open-files="1000000" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709687 4805 flags.go:64] FLAG: --max-pods="110" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709698 4805 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709708 4805 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709717 4805 flags.go:64] FLAG: --memory-manager-policy="None" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709727 4805 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709737 4805 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709747 4805 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709757 4805 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 26 17:14:46 
crc kubenswrapper[4805]: I0226 17:14:46.709786 4805 flags.go:64] FLAG: --node-status-max-images="50" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709801 4805 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709811 4805 flags.go:64] FLAG: --oom-score-adj="-999" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709821 4805 flags.go:64] FLAG: --pod-cidr="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709831 4805 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709844 4805 flags.go:64] FLAG: --pod-manifest-path="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709854 4805 flags.go:64] FLAG: --pod-max-pids="-1" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709864 4805 flags.go:64] FLAG: --pods-per-core="0" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709875 4805 flags.go:64] FLAG: --port="10250" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709885 4805 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709895 4805 flags.go:64] FLAG: --provider-id="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709905 4805 flags.go:64] FLAG: --qos-reserved="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709915 4805 flags.go:64] FLAG: --read-only-port="10255" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709925 4805 flags.go:64] FLAG: --register-node="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709935 4805 flags.go:64] FLAG: --register-schedulable="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709944 4805 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709961 4805 
flags.go:64] FLAG: --registry-burst="10" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709973 4805 flags.go:64] FLAG: --registry-qps="5" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709982 4805 flags.go:64] FLAG: --reserved-cpus="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.709992 4805 flags.go:64] FLAG: --reserved-memory="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710004 4805 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710074 4805 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710087 4805 flags.go:64] FLAG: --rotate-certificates="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710096 4805 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710106 4805 flags.go:64] FLAG: --runonce="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710115 4805 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710126 4805 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710136 4805 flags.go:64] FLAG: --seccomp-default="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710146 4805 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710155 4805 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710165 4805 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710175 4805 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710185 4805 flags.go:64] FLAG: --storage-driver-password="root" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710195 4805 
flags.go:64] FLAG: --storage-driver-secure="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710208 4805 flags.go:64] FLAG: --storage-driver-table="stats" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710218 4805 flags.go:64] FLAG: --storage-driver-user="root" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710228 4805 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710239 4805 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710249 4805 flags.go:64] FLAG: --system-cgroups="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710258 4805 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710275 4805 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710285 4805 flags.go:64] FLAG: --tls-cert-file="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710295 4805 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710307 4805 flags.go:64] FLAG: --tls-min-version="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710317 4805 flags.go:64] FLAG: --tls-private-key-file="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710327 4805 flags.go:64] FLAG: --topology-manager-policy="none" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710337 4805 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710346 4805 flags.go:64] FLAG: --topology-manager-scope="container" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710360 4805 flags.go:64] FLAG: --v="2" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710380 4805 flags.go:64] FLAG: --version="false" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710392 4805 flags.go:64] FLAG: 
--vmodule="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710404 4805 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.710416 4805 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710670 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710684 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710696 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710707 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710717 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710726 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710735 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710743 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710751 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710760 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710769 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710778 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 17:14:46 
crc kubenswrapper[4805]: W0226 17:14:46.710787 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710800 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710808 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710816 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710825 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710834 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710842 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710851 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710859 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710867 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710877 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710885 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710894 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710902 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710911 4805 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710920 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710929 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710938 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710946 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710957 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710968 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710978 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710987 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.710997 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711009 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711048 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711059 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711069 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711078 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711087 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711095 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711104 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711113 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711124 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711133 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711141 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711150 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711158 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711167 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711175 4805 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711183 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711193 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711201 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711210 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711218 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711226 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711235 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711246 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711258 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711267 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711277 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711287 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711297 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711306 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711316 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711325 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711334 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711343 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.711353 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.711367 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.722915 4805 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.722969 4805 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723137 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723161 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723173 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723183 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723195 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723205 4805 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723214 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723222 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723231 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723238 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723246 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723254 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723262 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723271 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723279 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723288 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723296 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723303 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723312 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723320 4805 feature_gate.go:330] unrecognized 
feature gate: MetricsCollectionProfiles Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723328 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723335 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723343 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723351 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723359 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723366 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723374 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723381 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723389 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723397 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723405 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723412 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723420 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723427 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 
17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723435 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723443 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723450 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723458 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723466 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723476 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723486 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723505 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723520 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723535 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723552 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723565 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723579 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723593 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723606 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723618 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723629 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723640 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723651 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723660 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723670 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723680 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723690 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723699 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723709 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723719 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723730 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 
17:14:46.723739 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723749 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723759 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723769 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723779 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723789 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723802 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723814 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723824 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.723833 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.723846 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724147 4805 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724171 4805 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724183 4805 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724193 4805 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724201 4805 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724210 4805 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724219 4805 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724227 4805 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724236 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724247 4805 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724258 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724268 4805 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724278 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724287 4805 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724297 4805 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724306 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724315 4805 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724323 4805 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724331 4805 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724339 4805 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724347 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724354 4805 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724364 4805 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724371 4805 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 
17:14:46.724379 4805 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724387 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724394 4805 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724403 4805 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724410 4805 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724418 4805 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724426 4805 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724436 4805 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724445 4805 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724454 4805 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724463 4805 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724471 4805 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724479 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724487 4805 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724507 4805 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724515 4805 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724524 4805 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724531 4805 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724539 4805 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724547 4805 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724555 4805 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724565 4805 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724573 4805 feature_gate.go:330] unrecognized 
feature gate: PrivateHostedZoneAWS Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724580 4805 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724588 4805 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724595 4805 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724604 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724612 4805 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724619 4805 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724627 4805 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724634 4805 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724642 4805 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724650 4805 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724661 4805 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724672 4805 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724683 4805 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724691 4805 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724700 4805 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724708 4805 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724715 4805 feature_gate.go:330] unrecognized feature gate: Example Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724723 4805 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724731 4805 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724739 4805 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724747 4805 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724754 4805 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724762 4805 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.724769 4805 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.724783 4805 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.725058 4805 server.go:940] "Client rotation is on, will bootstrap in background" Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.733075 4805 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.737334 4805 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.737469 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.739452 4805 server.go:997] "Starting client certificate rotation" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.739489 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.739710 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.767551 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.770913 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: 
connection refused" logger="UnhandledError" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.771056 4805 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.790385 4805 log.go:25] "Validated CRI v1 runtime API" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.832595 4805 log.go:25] "Validated CRI v1 image API" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.834860 4805 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.840010 4805 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-26-17-10-19-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.840102 4805 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.875365 4805 manager.go:217] Machine: {Timestamp:2026-02-26 17:14:46.872619973 +0000 UTC m=+1.434374382 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec 
SystemUUID:f8358516-caaf-41d0-bd53-1417d7f7dfbf BootID:aed24eaa-3a4d-469f-a8c6-d3249b191765 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:f8:65:2d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:f8:65:2d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:eb:1a:ef Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e1:3e:ab Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d4:e1:42 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7f:23:89 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:02:f9:8b:db:db Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:4f:d9:d1:26:de Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] 
SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.875898 4805 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.876173 4805 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.876986 4805 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.877388 4805 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.877451 4805 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.877834 4805 topology_manager.go:138] "Creating topology manager with none policy" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.877858 4805 container_manager_linux.go:303] "Creating device plugin manager" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.878461 4805 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.878525 4805 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.878821 4805 state_mem.go:36] "Initialized new in-memory state store" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.879360 4805 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.883204 4805 kubelet.go:418] "Attempting to sync node with API server" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.883249 4805 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.883300 4805 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.883329 4805 kubelet.go:324] "Adding apiserver pod source" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.883351 4805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.889067 4805 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.890379 4805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.892194 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.892274 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.892354 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.892433 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.892668 4805 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896038 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896112 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 
17:14:46.896125 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896137 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896157 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896170 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896183 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896203 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896217 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896229 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896246 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896257 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.896995 4805 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.897694 4805 server.go:1280] "Started kubelet" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.897833 4805 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.898581 4805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.899240 4805 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.899461 4805 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 26 17:14:46 crc systemd[1]: Started Kubernetes Kubelet. Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.900611 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.900654 4805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.900795 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.900870 4805 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.900878 4805 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.900934 4805 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.901499 4805 server.go:460] "Adding debug handlers to kubelet server" Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.902405 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.903476 4805 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial 
containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.903511 4805 factory.go:55] Registering systemd factory Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.903523 4805 factory.go:221] Registration of the systemd container factory successfully Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.904195 4805 factory.go:153] Registering CRI-O factory Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.904228 4805 factory.go:221] Registration of the crio container factory successfully Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.904219 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.904254 4805 factory.go:103] Registering Raw factory Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.904304 4805 manager.go:1196] Started watching for new ooms in manager Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.903905 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.905478 4805 manager.go:319] Starting recovery of all containers Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.907315 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" 
event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.921253 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.921631 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.921769 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.921867 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.921952 4805 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922159 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922261 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922343 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922429 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922512 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922609 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922722 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922841 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.922941 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923075 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923186 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923408 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" 
seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923696 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923799 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923888 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.923971 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924156 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924251 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 26 17:14:46 crc 
kubenswrapper[4805]: I0226 17:14:46.924354 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924461 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924550 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924645 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924729 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924820 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.924948 4805 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925063 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925175 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925258 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925340 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925420 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925503 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925593 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925680 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925761 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925873 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.925956 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926068 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926206 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926299 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926427 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926536 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926663 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.926791 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.927536 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.927652 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.927768 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.927871 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928009 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928178 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 
17:14:46.928293 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928424 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928545 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928666 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928794 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.928966 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929209 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929329 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929435 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929555 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929670 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929781 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.929888 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.930916 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.930953 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.930970 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.930985 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.930998 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931024 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931037 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931050 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931064 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931076 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931088 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931101 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 26 
17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931112 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931126 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931138 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931152 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931165 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931178 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931192 4805 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931204 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931217 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.931229 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934374 4805 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934447 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 26 17:14:46 crc 
kubenswrapper[4805]: I0226 17:14:46.934479 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934504 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934526 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934547 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934569 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934596 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934616 4805 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934664 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934684 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934705 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934730 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934751 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934771 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934810 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934894 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.934948 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.935669 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.935705 4805 manager.go:324] Recovery completed Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.935721 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.935984 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936081 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936114 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936145 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936176 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936206 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936231 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936254 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936277 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936298 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936319 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936339 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936368 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936394 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936415 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936438 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936466 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936487 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936506 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" 
seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936528 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936553 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936574 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936594 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936616 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936634 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 
17:14:46.936781 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936808 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936827 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936846 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936870 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936893 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936914 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936933 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936953 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936973 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.936994 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937055 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937080 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937103 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937128 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937167 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937185 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937204 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937223 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937244 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937264 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937280 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937303 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937325 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937344 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937363 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937406 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937428 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937448 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937466 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937485 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937504 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937525 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937545 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937564 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937581 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937600 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937619 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937639 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937657 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937678 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937712 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937731 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937757 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937781 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937800 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937820 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937841 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937869 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937894 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937914 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937938 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937961 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937978 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.937998 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938043 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938095 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938152 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938173 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938197 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938219 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938243 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938265 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938286 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938306 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938327 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938345 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938370 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938393 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938414 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938435 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938455 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938478 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" 
seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938501 4805 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938521 4805 reconstruct.go:97] "Volume reconstruction finished" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.938535 4805 reconciler.go:26] "Reconciler: start to sync state" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.949119 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.949986 4805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.950900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.950934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.950948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.951766 4805 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.951817 4805 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.951855 4805 kubelet.go:2335] "Starting kubelet main sync loop" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.951892 4805 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.951917 4805 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.951968 4805 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.952056 4805 state_mem.go:36] "Initialized new in-memory state store" Feb 26 17:14:46 crc kubenswrapper[4805]: W0226 17:14:46.952540 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:46 crc kubenswrapper[4805]: E0226 17:14:46.952606 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.974509 4805 policy_none.go:49] "None policy: Start" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.976052 4805 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 26 17:14:46 crc kubenswrapper[4805]: I0226 17:14:46.976082 4805 state_mem.go:35] "Initializing new in-memory state store" Feb 26 17:14:47 crc 
kubenswrapper[4805]: E0226 17:14:47.001391 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.049323 4805 manager.go:334] "Starting Device Plugin manager" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.049385 4805 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.049403 4805 server.go:79] "Starting device plugin registration server" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.050006 4805 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.050056 4805 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.050867 4805 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.051086 4805 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.051104 4805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.052154 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.052235 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.053134 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.053163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.053171 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.053302 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.053518 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.053567 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054109 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054284 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054461 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054488 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054604 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.054702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055253 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc 
kubenswrapper[4805]: I0226 17:14:47.055556 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055573 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.055989 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056544 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056630 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.056861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.057311 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.057347 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.058196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.058216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.058225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.058263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.058292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 
17:14:47.058309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.060384 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.105483 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141663 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141718 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141801 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141828 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141910 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141944 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.141994 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.142044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.150233 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.151364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.151406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.151418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.151441 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.152050 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243491 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243552 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243732 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243750 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243770 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243847 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243779 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243937 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243910 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243874 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243981 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243831 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243851 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.243888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.352798 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.353977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.354035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.354046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.354068 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.354739 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.403762 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.417968 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.424265 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.446797 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.450594 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:47 crc kubenswrapper[4805]: W0226 17:14:47.450922 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e760f9dddc805a1560f02863d40c8c2d03a14ebb73c2f487119395aedc7b12ba WatchSource:0}: Error finding container e760f9dddc805a1560f02863d40c8c2d03a14ebb73c2f487119395aedc7b12ba: Status 404 returned error can't find the container with id e760f9dddc805a1560f02863d40c8c2d03a14ebb73c2f487119395aedc7b12ba Feb 26 17:14:47 crc kubenswrapper[4805]: W0226 17:14:47.452419 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a5b1a727d976aa12203b417795f7cb11a15c85bd8e943554b8927fc58cc022e2 WatchSource:0}: Error finding container a5b1a727d976aa12203b417795f7cb11a15c85bd8e943554b8927fc58cc022e2: Status 404 returned error can't find the container with id a5b1a727d976aa12203b417795f7cb11a15c85bd8e943554b8927fc58cc022e2 Feb 26 17:14:47 crc kubenswrapper[4805]: W0226 17:14:47.457721 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-003e5ee905b000ff3f87835af7de88d448da7f96008d8378a4ea23ea5fe5a27d WatchSource:0}: Error finding container 003e5ee905b000ff3f87835af7de88d448da7f96008d8378a4ea23ea5fe5a27d: Status 404 returned error can't find the container with id 003e5ee905b000ff3f87835af7de88d448da7f96008d8378a4ea23ea5fe5a27d Feb 26 17:14:47 crc kubenswrapper[4805]: W0226 17:14:47.469926 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ac0c29ec96dab034c37f552228f8d8ed817fb97e11a38b3499fb743ccabc7ebe WatchSource:0}: Error finding container ac0c29ec96dab034c37f552228f8d8ed817fb97e11a38b3499fb743ccabc7ebe: Status 404 returned error can't find the container with id ac0c29ec96dab034c37f552228f8d8ed817fb97e11a38b3499fb743ccabc7ebe Feb 26 17:14:47 crc kubenswrapper[4805]: W0226 17:14:47.470738 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7f381c30fccb0d5a678e90a2b6fb3c5c348270ad601608654e673f354c0e0075 WatchSource:0}: Error finding container 7f381c30fccb0d5a678e90a2b6fb3c5c348270ad601608654e673f354c0e0075: Status 404 returned error can't find the container with id 7f381c30fccb0d5a678e90a2b6fb3c5c348270ad601608654e673f354c0e0075 Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.506349 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.755230 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:47 crc 
kubenswrapper[4805]: I0226 17:14:47.756286 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.756341 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.756354 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.756384 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.756678 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 26 17:14:47 crc kubenswrapper[4805]: W0226 17:14:47.814987 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:47 crc kubenswrapper[4805]: E0226 17:14:47.815098 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.900161 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.955754 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5b1a727d976aa12203b417795f7cb11a15c85bd8e943554b8927fc58cc022e2"} Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.956997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e760f9dddc805a1560f02863d40c8c2d03a14ebb73c2f487119395aedc7b12ba"} Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.958239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f381c30fccb0d5a678e90a2b6fb3c5c348270ad601608654e673f354c0e0075"} Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.959232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ac0c29ec96dab034c37f552228f8d8ed817fb97e11a38b3499fb743ccabc7ebe"} Feb 26 17:14:47 crc kubenswrapper[4805]: I0226 17:14:47.959817 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"003e5ee905b000ff3f87835af7de88d448da7f96008d8378a4ea23ea5fe5a27d"} Feb 26 17:14:48 crc kubenswrapper[4805]: W0226 17:14:48.156271 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:48 crc kubenswrapper[4805]: E0226 17:14:48.156626 4805 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:48 crc kubenswrapper[4805]: W0226 17:14:48.244981 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:48 crc kubenswrapper[4805]: E0226 17:14:48.245088 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:48 crc kubenswrapper[4805]: W0226 17:14:48.281976 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:48 crc kubenswrapper[4805]: E0226 17:14:48.282118 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:48 crc kubenswrapper[4805]: E0226 17:14:48.307277 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.557425 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.559937 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.559982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.559993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.560052 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:14:48 crc kubenswrapper[4805]: E0226 17:14:48.560359 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.882105 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 17:14:48 crc kubenswrapper[4805]: E0226 17:14:48.883294 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.900577 4805 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.963413 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e" exitCode=0 Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.963555 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.963890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.964489 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.964536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.964549 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.965282 4805 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a" exitCode=0 Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.965340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.965367 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.966091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.966132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.966142 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.968436 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.968460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.968469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba08578e2e225a893f884321ea71492c216b89d6b785ef5e2bca0bee1b16029e"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.968478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.968501 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.969268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.969297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.969306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.970279 4805 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5" exitCode=0 Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.970371 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.970364 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.971040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.971072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 
crc kubenswrapper[4805]: I0226 17:14:48.971084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.972325 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315" exitCode=0 Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.972358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315"} Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.972417 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.973276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.973306 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.973317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.976535 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.977291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.977327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:48 crc kubenswrapper[4805]: I0226 17:14:48.977340 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.679908 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.900229 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:49 crc kubenswrapper[4805]: E0226 17:14:49.908183 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.933321 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.978034 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd" exitCode=0 Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.978090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.978199 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.979167 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.979188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.979196 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.981395 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.981504 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.982877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.982932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.982951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985370 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.985900 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.987483 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.987818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.987847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.987860 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389"} Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.988624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.988644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:49 crc kubenswrapper[4805]: I0226 17:14:49.988652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.160761 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.162396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.162463 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.162484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.162528 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:14:50 crc kubenswrapper[4805]: E0226 17:14:50.163152 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 26 17:14:50 crc kubenswrapper[4805]: W0226 17:14:50.644384 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:50 crc kubenswrapper[4805]: E0226 17:14:50.644506 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:50 crc kubenswrapper[4805]: W0226 17:14:50.815670 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:50 crc kubenswrapper[4805]: E0226 17:14:50.815752 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.900685 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:50 crc kubenswrapper[4805]: W0226 17:14:50.908654 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:50 crc kubenswrapper[4805]: E0226 17:14:50.908757 4805 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.993667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bfa636e5637f14fa34168632e06dfe4a90e46c757207e76d3ad7111702d59a24"} Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.993723 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb"} Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.993893 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.995659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.995717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.995741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.996921 4805 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24" exitCode=0 Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.997039 4805 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.997071 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.997105 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.997138 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24"} Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.997072 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.997111 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998543 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:50 crc kubenswrapper[4805]: I0226 17:14:50.998880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:51 crc kubenswrapper[4805]: W0226 17:14:51.332422 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:51 crc kubenswrapper[4805]: E0226 17:14:51.332514 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:51 crc kubenswrapper[4805]: I0226 17:14:51.447712 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 26 17:14:51 crc kubenswrapper[4805]: I0226 17:14:51.900728 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.004096 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.004193 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.005171 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85"} Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.005665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.005723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.005745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.886974 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.901135 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.934156 4805 patch_prober.go:28] 
interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 17:14:52 crc kubenswrapper[4805]: I0226 17:14:52.934261 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.008919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab"} Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.009008 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.009897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.009932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.009946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:53 crc kubenswrapper[4805]: E0226 17:14:53.109869 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.277519 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 17:14:53 crc kubenswrapper[4805]: E0226 17:14:53.278970 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.363717 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.365624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.365688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.365698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.365724 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:14:53 crc kubenswrapper[4805]: E0226 17:14:53.366363 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.793148 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.793438 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.795290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.795361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.795384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:53 crc kubenswrapper[4805]: I0226 17:14:53.900415 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.013258 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.015741 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bfa636e5637f14fa34168632e06dfe4a90e46c757207e76d3ad7111702d59a24" exitCode=255 Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.015834 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bfa636e5637f14fa34168632e06dfe4a90e46c757207e76d3ad7111702d59a24"} Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.015865 4805 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.017000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.017068 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.017085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.018200 4805 scope.go:117] "RemoveContainer" containerID="bfa636e5637f14fa34168632e06dfe4a90e46c757207e76d3ad7111702d59a24" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.023782 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3"} Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.031284 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.031640 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.031742 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 
17:14:54.165808 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.166038 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.167570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.167623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.167637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:54 crc kubenswrapper[4805]: I0226 17:14:54.184305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:54 crc kubenswrapper[4805]: W0226 17:14:54.462502 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:14:54 crc kubenswrapper[4805]: E0226 17:14:54.462608 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.029716 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.032854 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991"} Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.032953 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.036176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.036218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.036231 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.043416 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.043598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f"} Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.043648 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.043662 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:14:55 crc kubenswrapper[4805]: 
I0226 17:14:55.043680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e"} Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.044653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.044683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.044699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.045054 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.045132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:55 crc kubenswrapper[4805]: I0226 17:14:55.045152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.046386 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.046462 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.046386 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.046475 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:14:56 crc 
kubenswrapper[4805]: I0226 17:14:56.047918 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.047978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.047990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.048000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.048045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.048063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.048079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.048078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:14:56 crc kubenswrapper[4805]: I0226 17:14:56.048123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.048880 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.050061 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.050144 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.050158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:14:57 crc kubenswrapper[4805]: E0226 17:14:57.061543 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.355313 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.355495 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.356790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.356862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:14:57 crc kubenswrapper[4805]: I0226 17:14:57.356878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:14:58 crc kubenswrapper[4805]: I0226 17:14:58.027961 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 17:14:58 crc kubenswrapper[4805]: I0226 17:14:58.028150 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:14:58 crc kubenswrapper[4805]: I0226 17:14:58.029657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:14:58 crc kubenswrapper[4805]: I0226 17:14:58.029757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:14:58 crc kubenswrapper[4805]: I0226 17:14:58.029777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:14:59 crc kubenswrapper[4805]: I0226 17:14:59.766796 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:14:59 crc kubenswrapper[4805]: I0226 17:14:59.768389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:14:59 crc kubenswrapper[4805]: I0226 17:14:59.768427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:14:59 crc kubenswrapper[4805]: I0226 17:14:59.768437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:14:59 crc kubenswrapper[4805]: I0226 17:14:59.768465 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 17:15:01 crc kubenswrapper[4805]: I0226 17:15:01.603067 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.599729 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.600074 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.601350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.601394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.601409 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.648745 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.934243 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 17:15:02 crc kubenswrapper[4805]: I0226 17:15:02.934346 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 17:15:03 crc kubenswrapper[4805]: I0226 17:15:03.065801 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:03 crc kubenswrapper[4805]: I0226 17:15:03.066756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:03 crc kubenswrapper[4805]: I0226 17:15:03.066790 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:03 crc kubenswrapper[4805]: I0226 17:15:03.066802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:03 crc kubenswrapper[4805]: I0226 17:15:03.080964 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.068630 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.069741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.069797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.069811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.703888 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:04 crc kubenswrapper[4805]: W0226 17:15:04.705606 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.705734 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:04 crc kubenswrapper[4805]: W0226 17:15:04.706226 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.706343 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.708318 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.713411 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.717215 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.717314 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 26 17:15:04 crc kubenswrapper[4805]: W0226 17:15:04.718316 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.718443 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.720519 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:04 crc kubenswrapper[4805]: W0226 17:15:04.724700 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.724818 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.726158 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.726238 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 26 17:15:04 crc kubenswrapper[4805]: E0226 17:15:04.726389 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 17:15:04 crc kubenswrapper[4805]: I0226 17:15:04.907080 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:04Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.073325 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.074147 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.076103 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991" exitCode=255
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.076149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991"}
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.076230 4805 scope.go:117] "RemoveContainer" containerID="bfa636e5637f14fa34168632e06dfe4a90e46c757207e76d3ad7111702d59a24"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.076348 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.077407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.077476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.077490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.078398 4805 scope.go:117] "RemoveContainer" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991"
Feb 26 17:15:05 crc kubenswrapper[4805]: E0226 17:15:05.078670 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 17:15:05 crc kubenswrapper[4805]: I0226 17:15:05.902860 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:05Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:06 crc kubenswrapper[4805]: I0226 17:15:06.081165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 26 17:15:06 crc kubenswrapper[4805]: I0226 17:15:06.903068 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:06Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:07 crc kubenswrapper[4805]: E0226 17:15:07.061702 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.481730 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.482298 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.483966 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.484006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.484078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.484928 4805 scope.go:117] "RemoveContainer" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991"
Feb 26 17:15:07 crc kubenswrapper[4805]: E0226 17:15:07.485223 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 17:15:07 crc kubenswrapper[4805]: I0226 17:15:07.905702 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:07Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:08 crc kubenswrapper[4805]: I0226 17:15:08.903010 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:08Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.038820 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.039042 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.040088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.040148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.040161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.040776 4805 scope.go:117] "RemoveContainer" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991"
Feb 26 17:15:09 crc kubenswrapper[4805]: E0226 17:15:09.041002 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.044877 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.094940 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.096287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.096363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.096378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.097247 4805 scope.go:117] "RemoveContainer" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991"
Feb 26 17:15:09 crc kubenswrapper[4805]: E0226 17:15:09.097555 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 26 17:15:09 crc kubenswrapper[4805]: I0226 17:15:09.903222 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:09Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:10 crc kubenswrapper[4805]: I0226 17:15:10.902647 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:10Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:11 crc kubenswrapper[4805]: E0226 17:15:11.712853 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:11Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 26 17:15:11 crc kubenswrapper[4805]: I0226 17:15:11.713870 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:11 crc kubenswrapper[4805]: I0226 17:15:11.715482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:11 crc kubenswrapper[4805]: I0226 17:15:11.715525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:11 crc kubenswrapper[4805]: I0226 17:15:11.715538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:11 crc kubenswrapper[4805]: I0226 17:15:11.715569 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 17:15:11 crc kubenswrapper[4805]: E0226 17:15:11.719190 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:11Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 26 17:15:11 crc kubenswrapper[4805]: I0226 17:15:11.903604 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:11Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:12 crc kubenswrapper[4805]: W0226 17:15:12.755780 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:12Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:12 crc kubenswrapper[4805]: E0226 17:15:12.755893 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.905370 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:12Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.935155 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.935356 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.935474 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.935744 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.937476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.937532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.937609 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.938492 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ba08578e2e225a893f884321ea71492c216b89d6b785ef5e2bca0bee1b16029e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 26 17:15:12 crc kubenswrapper[4805]: I0226 17:15:12.938731 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://ba08578e2e225a893f884321ea71492c216b89d6b785ef5e2bca0bee1b16029e" gracePeriod=30
Feb 26 17:15:13 crc kubenswrapper[4805]: I0226 17:15:13.109348 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 26 17:15:13 crc kubenswrapper[4805]: I0226 17:15:13.110882 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ba08578e2e225a893f884321ea71492c216b89d6b785ef5e2bca0bee1b16029e" exitCode=255
Feb 26 17:15:13 crc kubenswrapper[4805]: I0226 17:15:13.110975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ba08578e2e225a893f884321ea71492c216b89d6b785ef5e2bca0bee1b16029e"}
Feb 26 17:15:13 crc kubenswrapper[4805]: I0226 17:15:13.903156 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:13Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.117279 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.117862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0"}
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.117911 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.119055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.119097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.119111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:14 crc kubenswrapper[4805]: E0226 17:15:14.732485 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:14Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 26 17:15:14 crc kubenswrapper[4805]: I0226 17:15:14.904567 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:14Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:15 crc kubenswrapper[4805]: I0226 17:15:15.120399 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:15 crc kubenswrapper[4805]: I0226 17:15:15.121837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:15 crc kubenswrapper[4805]: I0226 17:15:15.121886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:15 crc kubenswrapper[4805]: I0226 17:15:15.121899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:15 crc kubenswrapper[4805]: W0226 17:15:15.435431 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:15Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:15 crc kubenswrapper[4805]: E0226 17:15:15.435528 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:15Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:15 crc kubenswrapper[4805]: I0226 17:15:15.903446 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:15Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:16 crc kubenswrapper[4805]: W0226 17:15:16.557073 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:16Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:16 crc kubenswrapper[4805]: E0226 17:15:16.557178 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 26 17:15:16 crc kubenswrapper[4805]: I0226 17:15:16.904009 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:16Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:17 crc kubenswrapper[4805]: E0226 17:15:17.061867 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 26 17:15:17 crc kubenswrapper[4805]: I0226 17:15:17.904794 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:17Z is after 2026-02-23T05:33:13Z
Feb 26 17:15:18 crc kubenswrapper[4805]: E0226 17:15:18.717850 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:18Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 26 17:15:18 crc kubenswrapper[4805]: I0226 17:15:18.720079 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 26 17:15:18 crc kubenswrapper[4805]: I0226 17:15:18.721685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:15:18 crc kubenswrapper[4805]: I0226 17:15:18.721726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:15:18 crc kubenswrapper[4805]: I0226 17:15:18.721739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:15:18 crc kubenswrapper[4805]: I0226 17:15:18.721768 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 26 17:15:18 crc kubenswrapper[4805]: E0226 17:15:18.724873 4805 kubelet_node_status.go:99] "Unable to
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:18Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 17:15:18 crc kubenswrapper[4805]: I0226 17:15:18.903569 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:18Z is after 2026-02-23T05:33:13Z Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.680663 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.680866 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.682504 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.682613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.682637 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.905310 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:19Z is after 2026-02-23T05:33:13Z Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.934418 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.953278 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.954889 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.954943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.954964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:19 crc kubenswrapper[4805]: I0226 17:15:19.955803 4805 scope.go:117] "RemoveContainer" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991" Feb 26 17:15:20 crc kubenswrapper[4805]: I0226 17:15:20.133901 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:20 crc kubenswrapper[4805]: I0226 17:15:20.134776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:20 crc kubenswrapper[4805]: I0226 17:15:20.134826 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:20 crc kubenswrapper[4805]: I0226 17:15:20.134842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:20 crc kubenswrapper[4805]: I0226 17:15:20.903990 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-26T17:15:20Z is after 2026-02-23T05:33:13Z Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.139196 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.139496 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.140238 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.142749 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba" exitCode=255 Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.142804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba"} Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.142848 4805 scope.go:117] "RemoveContainer" containerID="75438a61c4f4c4c69aef2cb90820412135e9e93e9d5ea505eb489a5023b89991" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.142970 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.144415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.144471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:21 
crc kubenswrapper[4805]: I0226 17:15:21.144488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:21 crc kubenswrapper[4805]: E0226 17:15:21.145113 4805 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.145567 4805 scope.go:117] "RemoveContainer" containerID="fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba" Feb 26 17:15:21 crc kubenswrapper[4805]: E0226 17:15:21.145966 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:21 crc kubenswrapper[4805]: E0226 17:15:21.146265 4805 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 26 17:15:21 crc kubenswrapper[4805]: I0226 17:15:21.903999 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:21Z is 
after 2026-02-23T05:33:13Z Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.147704 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 17:15:22 crc kubenswrapper[4805]: W0226 17:15:22.735712 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:22Z is after 2026-02-23T05:33:13Z Feb 26 17:15:22 crc kubenswrapper[4805]: E0226 17:15:22.735847 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:22Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.887741 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.887988 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.889536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.889574 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.889585 4805 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.890204 4805 scope.go:117] "RemoveContainer" containerID="fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba" Feb 26 17:15:22 crc kubenswrapper[4805]: E0226 17:15:22.890390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.905464 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:22Z is after 2026-02-23T05:33:13Z Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.935401 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 17:15:22 crc kubenswrapper[4805]: I0226 17:15:22.935521 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 17:15:23 crc kubenswrapper[4805]: I0226 17:15:23.904777 4805 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:23Z is after 2026-02-23T05:33:13Z Feb 26 17:15:24 crc kubenswrapper[4805]: E0226 17:15:24.738188 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:24Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:24 crc kubenswrapper[4805]: I0226 17:15:24.904768 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:24Z is after 2026-02-23T05:33:13Z Feb 26 17:15:25 crc kubenswrapper[4805]: E0226 17:15:25.721055 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-26T17:15:25Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 17:15:25 crc kubenswrapper[4805]: I0226 17:15:25.725278 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:25 crc kubenswrapper[4805]: I0226 17:15:25.726970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:25 crc kubenswrapper[4805]: I0226 17:15:25.727042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:25 crc kubenswrapper[4805]: I0226 17:15:25.727053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:25 crc kubenswrapper[4805]: I0226 17:15:25.727083 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:15:25 crc kubenswrapper[4805]: E0226 17:15:25.729832 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:25Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 17:15:25 crc kubenswrapper[4805]: I0226 17:15:25.903411 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:25Z is after 2026-02-23T05:33:13Z Feb 26 17:15:26 crc kubenswrapper[4805]: W0226 17:15:26.356738 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-26T17:15:26Z is after 2026-02-23T05:33:13Z Feb 26 17:15:26 crc kubenswrapper[4805]: E0226 17:15:26.356854 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 17:15:26 crc kubenswrapper[4805]: I0226 17:15:26.903217 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:26Z is after 2026-02-23T05:33:13Z Feb 26 17:15:27 crc kubenswrapper[4805]: E0226 17:15:27.061986 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.480563 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.480893 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.482651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.482715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.482734 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.483786 4805 scope.go:117] "RemoveContainer" containerID="fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba" Feb 26 17:15:27 crc kubenswrapper[4805]: E0226 17:15:27.484186 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:27 crc kubenswrapper[4805]: I0226 17:15:27.904570 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:27Z is after 2026-02-23T05:33:13Z Feb 26 17:15:28 crc kubenswrapper[4805]: I0226 17:15:28.904947 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:28Z is after 2026-02-23T05:33:13Z Feb 26 17:15:29 crc kubenswrapper[4805]: I0226 17:15:29.905216 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:29Z is after 2026-02-23T05:33:13Z Feb 26 17:15:30 crc kubenswrapper[4805]: W0226 17:15:30.558486 4805 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:30Z is after 2026-02-23T05:33:13Z Feb 26 17:15:30 crc kubenswrapper[4805]: E0226 17:15:30.558623 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:30Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 17:15:30 crc kubenswrapper[4805]: I0226 17:15:30.905271 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:30Z is after 2026-02-23T05:33:13Z Feb 26 17:15:31 crc kubenswrapper[4805]: I0226 17:15:31.902067 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:31Z is after 2026-02-23T05:33:13Z Feb 26 17:15:32 crc kubenswrapper[4805]: E0226 17:15:32.725128 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:32Z is after 
2026-02-23T05:33:13Z" interval="7s" Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.730319 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.732205 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.732261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.732272 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.732302 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:15:32 crc kubenswrapper[4805]: E0226 17:15:32.735210 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:32Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.906226 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:32Z is after 2026-02-23T05:33:13Z Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.934894 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Feb 26 17:15:32 crc kubenswrapper[4805]: I0226 17:15:32.934987 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 17:15:33 crc kubenswrapper[4805]: I0226 17:15:33.902777 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:33Z is after 2026-02-23T05:33:13Z Feb 26 17:15:34 crc kubenswrapper[4805]: E0226 17:15:34.742966 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:34Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:34 crc kubenswrapper[4805]: I0226 17:15:34.903142 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:34Z is after 2026-02-23T05:33:13Z Feb 26 17:15:35 crc kubenswrapper[4805]: I0226 17:15:35.903777 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:35Z is after 2026-02-23T05:33:13Z Feb 26 17:15:36 crc kubenswrapper[4805]: I0226 17:15:36.904484 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:36Z is after 2026-02-23T05:33:13Z Feb 26 17:15:37 crc kubenswrapper[4805]: E0226 17:15:37.062111 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 17:15:37 crc kubenswrapper[4805]: W0226 17:15:37.836638 4805 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:37Z is after 2026-02-23T05:33:13Z Feb 26 17:15:37 crc kubenswrapper[4805]: E0226 17:15:37.836763 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 26 17:15:37 crc kubenswrapper[4805]: I0226 17:15:37.903061 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:37Z is after 2026-02-23T05:33:13Z Feb 26 17:15:38 crc kubenswrapper[4805]: I0226 17:15:38.903405 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:38Z is after 2026-02-23T05:33:13Z Feb 26 17:15:39 crc kubenswrapper[4805]: E0226 17:15:39.731730 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:39Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 17:15:39 crc kubenswrapper[4805]: I0226 17:15:39.735852 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:39 crc kubenswrapper[4805]: I0226 17:15:39.737464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:39 crc kubenswrapper[4805]: I0226 17:15:39.737521 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:39 crc kubenswrapper[4805]: I0226 17:15:39.737546 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:39 crc kubenswrapper[4805]: I0226 17:15:39.737590 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:15:39 crc kubenswrapper[4805]: E0226 17:15:39.742779 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:39Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 17:15:39 crc kubenswrapper[4805]: I0226 17:15:39.904368 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:39Z is after 2026-02-23T05:33:13Z Feb 26 17:15:40 crc kubenswrapper[4805]: I0226 17:15:40.904087 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:40Z is after 2026-02-23T05:33:13Z Feb 26 17:15:41 crc kubenswrapper[4805]: I0226 17:15:41.904255 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:41Z is after 2026-02-23T05:33:13Z Feb 26 17:15:41 crc kubenswrapper[4805]: I0226 17:15:41.952792 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:41 crc kubenswrapper[4805]: I0226 17:15:41.954589 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:41 crc kubenswrapper[4805]: I0226 17:15:41.954820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:41 crc kubenswrapper[4805]: I0226 17:15:41.954968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:41 crc kubenswrapper[4805]: I0226 17:15:41.956075 4805 scope.go:117] "RemoveContainer" containerID="fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.208922 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.211044 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4"} Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.903412 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:13Z Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.934372 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.934522 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.934601 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.934796 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.936064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.936097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.936115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.936874 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 26 17:15:42 crc kubenswrapper[4805]: I0226 17:15:42.936978 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0" gracePeriod=30 Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.216119 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.216897 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.219374 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" exitCode=255 Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.219437 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4"} Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.219797 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.220056 4805 scope.go:117] "RemoveContainer" containerID="fb5e308a835bd21e4b68bba4ed0cc363233bf17d1c5618fc9233b656eadac6ba" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.227612 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.227660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.227674 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.228379 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.228409 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:15:43 crc kubenswrapper[4805]: E0226 17:15:43.229009 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.230892 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.231873 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0"} Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.231799 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0" exitCode=255 Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.253231 4805 scope.go:117] "RemoveContainer" 
containerID="ba08578e2e225a893f884321ea71492c216b89d6b785ef5e2bca0bee1b16029e" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.799377 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.799552 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.800508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.800537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.800545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:43 crc kubenswrapper[4805]: I0226 17:15:43.903734 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:43Z is after 2026-02-23T05:33:13Z Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.236559 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.239525 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.241043 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.241077 
4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.241086 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.241520 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:15:44 crc kubenswrapper[4805]: E0226 17:15:44.241683 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.242243 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.243948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8"} Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.244129 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.245417 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.245446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.245457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:44 crc kubenswrapper[4805]: E0226 17:15:44.749699 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:44Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:44 crc kubenswrapper[4805]: I0226 17:15:44.902850 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:44Z is after 2026-02-23T05:33:13Z Feb 26 17:15:45 crc kubenswrapper[4805]: I0226 17:15:45.246929 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:45 crc kubenswrapper[4805]: I0226 17:15:45.248815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:45 crc kubenswrapper[4805]: I0226 17:15:45.248850 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:45 crc kubenswrapper[4805]: I0226 17:15:45.248860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:45 crc kubenswrapper[4805]: I0226 17:15:45.903242 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:45Z is after 2026-02-23T05:33:13Z Feb 26 17:15:46 crc kubenswrapper[4805]: E0226 17:15:46.736341 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:46Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 26 17:15:46 crc kubenswrapper[4805]: I0226 17:15:46.742898 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:46 crc kubenswrapper[4805]: I0226 17:15:46.744420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:46 crc kubenswrapper[4805]: I0226 17:15:46.744469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:46 crc kubenswrapper[4805]: I0226 17:15:46.744486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:46 crc kubenswrapper[4805]: I0226 17:15:46.744518 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:15:46 crc kubenswrapper[4805]: E0226 17:15:46.749678 4805 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:46Z is after 2026-02-23T05:33:13Z" node="crc" Feb 26 17:15:46 crc kubenswrapper[4805]: I0226 17:15:46.905609 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:46Z is after 2026-02-23T05:33:13Z Feb 26 17:15:47 crc kubenswrapper[4805]: E0226 17:15:47.062519 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.480403 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.480631 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.482227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.482278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.482297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.483116 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:15:47 crc kubenswrapper[4805]: E0226 17:15:47.483493 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:47 crc kubenswrapper[4805]: I0226 17:15:47.903361 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:47Z is after 2026-02-23T05:33:13Z Feb 26 17:15:48 crc kubenswrapper[4805]: I0226 17:15:48.902946 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:48Z is after 2026-02-23T05:33:13Z Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.680464 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.680610 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.681836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.681885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.681902 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.904097 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:49Z is after 2026-02-23T05:33:13Z Feb 26 17:15:49 crc kubenswrapper[4805]: I0226 17:15:49.933841 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:15:50 crc kubenswrapper[4805]: I0226 17:15:50.257630 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:50 crc kubenswrapper[4805]: I0226 17:15:50.259040 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:50 crc kubenswrapper[4805]: I0226 17:15:50.259074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:50 crc kubenswrapper[4805]: I0226 17:15:50.259084 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:50 crc kubenswrapper[4805]: I0226 17:15:50.906969 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:51 crc kubenswrapper[4805]: I0226 17:15:51.907008 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:52 crc kubenswrapper[4805]: W0226 17:15:52.549346 4805 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 26 17:15:52 crc kubenswrapper[4805]: E0226 17:15:52.549394 4805 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.887069 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.887557 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.888664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.888823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.888944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.889705 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:15:52 crc kubenswrapper[4805]: E0226 17:15:52.890002 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.904684 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.934800 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 26 17:15:52 crc kubenswrapper[4805]: I0226 17:15:52.934870 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.148180 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.161741 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 17:15:53 crc kubenswrapper[4805]: E0226 17:15:53.741972 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the 
namespace \"kube-node-lease\"" interval="7s" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.750090 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.751313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.751353 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.751371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.751399 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:15:53 crc kubenswrapper[4805]: E0226 17:15:53.755328 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 17:15:53 crc kubenswrapper[4805]: I0226 17:15:53.905733 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.755956 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4972bcd154 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,LastTimestamp:2026-02-26 17:14:46.897652052 +0000 UTC m=+1.459406411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.760394 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.765270 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC 
m=+1.512698285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.769041 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.773247 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db497bf1cb60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:47.05211888 +0000 UTC m=+1.613873219,LastTimestamp:2026-02-26 17:14:47.05211888 +0000 UTC m=+1.613873219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 
17:15:54.777697 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.053152432 +0000 UTC m=+1.614906771,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.783783 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.053168392 +0000 UTC m=+1.614922731,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.788154 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975ea28db\" is forbidden: User \"system:anonymous\" cannot 
patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:47.053175943 +0000 UTC m=+1.614930282,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.792746 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.054138061 +0000 UTC m=+1.615892400,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.797623 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.054174102 +0000 UTC m=+1.615928441,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.802352 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975ea28db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:47.054181673 +0000 UTC m=+1.615936012,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.807320 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.05462289 +0000 UTC m=+1.616377239,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.813792 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.054695593 +0000 UTC m=+1.616449942,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.817887 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975ea28db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:47.054710104 +0000 UTC m=+1.616464463,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.823897 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.055172252 +0000 UTC m=+1.616926591,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.829241 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC 
m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.055180842 +0000 UTC m=+1.616935181,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.835966 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975ea28db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:47.055187953 +0000 UTC m=+1.616942292,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.842401 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.055304077 +0000 UTC m=+1.617058426,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.847114 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.055323018 +0000 UTC m=+1.617077367,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.851447 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975ea28db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:47.055336659 +0000 UTC m=+1.617091008,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.855115 4805 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.055900651 +0000 UTC m=+1.617654980,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.858923 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.055910172 +0000 UTC m=+1.617664501,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.863781 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975ea28db\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975ea28db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950955227 +0000 UTC m=+1.512709586,LastTimestamp:2026-02-26 17:14:47.055917372 +0000 UTC m=+1.617671711,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.868506 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e99b49\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e99b49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950918985 +0000 UTC m=+1.512673344,LastTimestamp:2026-02-26 17:14:47.056308397 +0000 UTC m=+1.618062746,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.873511 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1897db4975e9fcb6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1897db4975e9fcb6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:46.950943926 +0000 UTC m=+1.512698285,LastTimestamp:2026-02-26 17:14:47.056320888 +0000 UTC m=+1.618075237,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.879851 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db499433d018 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:47.459098648 +0000 UTC m=+2.020852997,LastTimestamp:2026-02-26 17:14:47.459098648 +0000 UTC m=+2.020852997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.885269 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db4994367d79 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:47.459274105 +0000 UTC m=+2.021028444,LastTimestamp:2026-02-26 17:14:47.459274105 +0000 UTC m=+2.021028444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.891009 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897db499446fba2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:47.460354978 +0000 UTC m=+2.022109337,LastTimestamp:2026-02-26 17:14:47.460354978 +0000 UTC m=+2.022109337,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.896895 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db499549a91a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:47.477307674 +0000 UTC m=+2.039062013,LastTimestamp:2026-02-26 17:14:47.477307674 +0000 UTC m=+2.039062013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.901493 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db49954a917e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:47.477367166 +0000 UTC m=+2.039121505,LastTimestamp:2026-02-26 17:14:47.477367166 +0000 UTC m=+2.039121505,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: I0226 17:15:54.901602 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.906075 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897db49b89744b4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.06959634 +0000 UTC m=+2.631350679,LastTimestamp:2026-02-26 17:14:48.06959634 +0000 UTC m=+2.631350679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.910599 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49b89abf0d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.069824269 +0000 UTC m=+2.631578608,LastTimestamp:2026-02-26 17:14:48.069824269 +0000 UTC m=+2.631578608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.914172 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db49b8a36943 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.070392131 +0000 UTC m=+2.632146480,LastTimestamp:2026-02-26 17:14:48.070392131 +0000 UTC m=+2.632146480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.918225 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db49b8a56d07 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.070524167 +0000 UTC m=+2.632278496,LastTimestamp:2026-02-26 17:14:48.070524167 +0000 UTC m=+2.632278496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.922493 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db49b8a87883 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.070723715 +0000 UTC m=+2.632478054,LastTimestamp:2026-02-26 17:14:48.070723715 +0000 UTC m=+2.632478054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.928447 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897db49b988f21f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.085434911 +0000 UTC m=+2.647189250,LastTimestamp:2026-02-26 17:14:48.085434911 +0000 UTC m=+2.647189250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.932254 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49b9c5fc05 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.089435141 +0000 UTC m=+2.651189480,LastTimestamp:2026-02-26 17:14:48.089435141 +0000 UTC m=+2.651189480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.936767 4805 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db49b9c781ff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.089534975 +0000 UTC m=+2.651289314,LastTimestamp:2026-02-26 17:14:48.089534975 +0000 UTC m=+2.651289314,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.942409 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db49b9c97ec4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.08966522 +0000 UTC m=+2.651419559,LastTimestamp:2026-02-26 17:14:48.08966522 +0000 UTC m=+2.651419559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.948130 4805 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db49b9cdc449 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.089945161 +0000 UTC m=+2.651699500,LastTimestamp:2026-02-26 17:14:48.089945161 +0000 UTC m=+2.651699500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.953574 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49b9d1d602 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.090211842 +0000 UTC m=+2.651966181,LastTimestamp:2026-02-26 17:14:48.090211842 +0000 UTC 
m=+2.651966181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.957681 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49cb0a9acf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.379144911 +0000 UTC m=+2.940899250,LastTimestamp:2026-02-26 17:14:48.379144911 +0000 UTC m=+2.940899250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.961678 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49cba90fd3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.389529555 +0000 UTC m=+2.951283904,LastTimestamp:2026-02-26 17:14:48.389529555 +0000 UTC m=+2.951283904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.967217 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49cbb9c147 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.390623559 +0000 UTC m=+2.952377918,LastTimestamp:2026-02-26 17:14:48.390623559 +0000 UTC m=+2.952377918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.972407 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49d61392f7 openshift-kube-controller-manager 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.564282103 +0000 UTC m=+3.126036442,LastTimestamp:2026-02-26 17:14:48.564282103 +0000 UTC m=+3.126036442,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.978191 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49d6dd6eda openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.57751113 +0000 UTC m=+3.139265469,LastTimestamp:2026-02-26 17:14:48.57751113 +0000 UTC m=+3.139265469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.982699 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49d6f2c236 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.578908726 +0000 UTC m=+3.140663085,LastTimestamp:2026-02-26 17:14:48.578908726 +0000 UTC m=+3.140663085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.989141 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49e0c9444d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.743961677 +0000 UTC m=+3.305716016,LastTimestamp:2026-02-26 17:14:48.743961677 +0000 
UTC m=+3.305716016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:54 crc kubenswrapper[4805]: E0226 17:15:54.996009 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49e19a6d90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.757669264 +0000 UTC m=+3.319423613,LastTimestamp:2026-02-26 17:14:48.757669264 +0000 UTC m=+3.319423613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.002359 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db49ee08f64c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.96623982 +0000 UTC m=+3.527994159,LastTimestamp:2026-02-26 17:14:48.96623982 +0000 UTC m=+3.527994159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.008835 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897db49ee1cffd6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.967552982 +0000 UTC m=+3.529307321,LastTimestamp:2026-02-26 17:14:48.967552982 +0000 UTC m=+3.529307321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.013945 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db49ee73d32b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.973243179 +0000 UTC m=+3.534997518,LastTimestamp:2026-02-26 17:14:48.973243179 +0000 UTC m=+3.534997518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.018799 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db49eea3527b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.976355963 +0000 UTC m=+3.538110302,LastTimestamp:2026-02-26 17:14:48.976355963 +0000 UTC m=+3.538110302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.023420 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897db49f92dde75 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.153207925 +0000 UTC m=+3.714962264,LastTimestamp:2026-02-26 17:14:49.153207925 +0000 UTC m=+3.714962264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.027764 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db49f96dcf1e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.157398302 +0000 UTC m=+3.719152641,LastTimestamp:2026-02-26 
17:14:49.157398302 +0000 UTC m=+3.719152641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.032417 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db49f980bdf3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.158639091 +0000 UTC m=+3.720393430,LastTimestamp:2026-02-26 17:14:49.158639091 +0000 UTC m=+3.720393430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.035414 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db49f9891347 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.159185223 +0000 UTC m=+3.720939572,LastTimestamp:2026-02-26 
17:14:49.159185223 +0000 UTC m=+3.720939572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.036756 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1897db49fa037943 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.167206723 +0000 UTC m=+3.728961062,LastTimestamp:2026-02-26 17:14:49.167206723 +0000 UTC m=+3.728961062,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.040832 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db49fa262123 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container 
kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.169477923 +0000 UTC m=+3.731232262,LastTimestamp:2026-02-26 17:14:49.169477923 +0000 UTC m=+3.731232262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.047163 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db49fa3722de openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.170592478 +0000 UTC m=+3.732346827,LastTimestamp:2026-02-26 17:14:49.170592478 +0000 UTC m=+3.732346827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.051891 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db49fa667ab9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.173695161 +0000 UTC m=+3.735449490,LastTimestamp:2026-02-26 17:14:49.173695161 +0000 UTC m=+3.735449490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.055780 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db49fabc1af9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.179306745 +0000 UTC m=+3.741061084,LastTimestamp:2026-02-26 17:14:49.179306745 +0000 UTC m=+3.741061084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.059068 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db49fd060514 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.217705236 +0000 UTC m=+3.779459575,LastTimestamp:2026-02-26 17:14:49.217705236 +0000 UTC m=+3.779459575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.062722 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a07a3a6a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.395807908 +0000 UTC m=+3.957562257,LastTimestamp:2026-02-26 17:14:49.395807908 +0000 UTC m=+3.957562257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.066814 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db4a07a67151 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.395990865 +0000 UTC m=+3.957745214,LastTimestamp:2026-02-26 17:14:49.395990865 +0000 UTC m=+3.957745214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.071307 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a0980fc0d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.427090445 +0000 UTC m=+3.988844784,LastTimestamp:2026-02-26 17:14:49.427090445 +0000 UTC m=+3.988844784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.075534 4805 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a098f6a61 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.428036193 +0000 UTC m=+3.989790532,LastTimestamp:2026-02-26 17:14:49.428036193 +0000 UTC m=+3.989790532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.079816 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db4a09f450aa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.434648746 +0000 UTC m=+3.996403085,LastTimestamp:2026-02-26 17:14:49.434648746 +0000 UTC 
m=+3.996403085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.084628 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db4a0a1cff39 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.437314873 +0000 UTC m=+3.999069212,LastTimestamp:2026-02-26 17:14:49.437314873 +0000 UTC m=+3.999069212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.089353 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db4a17694404 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.660417028 +0000 UTC m=+4.222171367,LastTimestamp:2026-02-26 17:14:49.660417028 +0000 UTC m=+4.222171367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.096517 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a194edb57 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.692240727 +0000 UTC m=+4.253995066,LastTimestamp:2026-02-26 17:14:49.692240727 +0000 UTC m=+4.253995066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.100713 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1897db4a1da25ceb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.764822251 +0000 UTC m=+4.326576590,LastTimestamp:2026-02-26 17:14:49.764822251 +0000 UTC m=+4.326576590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.104699 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a1e59c20f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.776841231 +0000 UTC m=+4.338595600,LastTimestamp:2026-02-26 17:14:49.776841231 +0000 UTC m=+4.338595600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.108566 4805 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a1e6b78bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.778002107 +0000 UTC m=+4.339756446,LastTimestamp:2026-02-26 17:14:49.778002107 +0000 UTC m=+4.339756446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.113616 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4a2a7fead4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:49.980668628 +0000 UTC m=+4.542422967,LastTimestamp:2026-02-26 17:14:49.980668628 +0000 UTC 
m=+4.542422967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.119407 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a34d18ff8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.15379148 +0000 UTC m=+4.715545829,LastTimestamp:2026-02-26 17:14:50.15379148 +0000 UTC m=+4.715545829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.125613 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a3c02b901 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 
17:14:50.274453761 +0000 UTC m=+4.836208110,LastTimestamp:2026-02-26 17:14:50.274453761 +0000 UTC m=+4.836208110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.132421 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a3c1dacda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.276220122 +0000 UTC m=+4.837974501,LastTimestamp:2026-02-26 17:14:50.276220122 +0000 UTC m=+4.837974501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.136994 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4a3d17814f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.292592975 +0000 UTC m=+4.854347354,LastTimestamp:2026-02-26 17:14:50.292592975 +0000 UTC m=+4.854347354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.140721 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4a49062d55 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.492783957 +0000 UTC m=+5.054538336,LastTimestamp:2026-02-26 17:14:50.492783957 +0000 UTC m=+5.054538336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.144907 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a50e6bfca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.624942026 +0000 UTC m=+5.186696375,LastTimestamp:2026-02-26 17:14:50.624942026 +0000 UTC m=+5.186696375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.149043 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a5374508e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.667774094 +0000 UTC m=+5.229528443,LastTimestamp:2026-02-26 17:14:50.667774094 +0000 UTC m=+5.229528443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.154442 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4a6760ce5e 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:51.002039902 +0000 UTC m=+5.563794241,LastTimestamp:2026-02-26 17:14:51.002039902 +0000 UTC m=+5.563794241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.158707 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4a856446a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:51.50558378 +0000 UTC m=+6.067338119,LastTimestamp:2026-02-26 17:14:51.50558378 +0000 UTC m=+6.067338119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.163932 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.1897db4a88771b4c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:51.557149516 +0000 UTC m=+6.118903855,LastTimestamp:2026-02-26 17:14:51.557149516 +0000 UTC m=+6.118903855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.168553 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4a88842192 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:51.55800309 +0000 UTC m=+6.119757429,LastTimestamp:2026-02-26 17:14:51.55800309 +0000 UTC m=+6.119757429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.173385 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4aacd61a19 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.167354905 +0000 UTC m=+6.729109254,LastTimestamp:2026-02-26 17:14:52.167354905 +0000 UTC m=+6.729109254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.178561 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4ab4c4f4f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.300449012 +0000 UTC m=+6.862203381,LastTimestamp:2026-02-26 17:14:52.300449012 +0000 UTC m=+6.862203381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.183170 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4ab4d832da openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.301710042 +0000 UTC m=+6.863464391,LastTimestamp:2026-02-26 17:14:52.301710042 +0000 UTC m=+6.863464391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.188652 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8bc9ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 17:15:55 crc kubenswrapper[4805]: body: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934236653 +0000 UTC m=+7.495991042,LastTimestamp:2026-02-26 17:14:52.934236653 +0000 UTC m=+7.495991042,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.192445 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8ce799 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934309785 +0000 UTC m=+7.496064174,LastTimestamp:2026-02-26 17:14:52.934309785 +0000 UTC m=+7.496064174,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.199010 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4aeb7d0792 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:53.218482066 +0000 UTC m=+7.780236415,LastTimestamp:2026-02-26 17:14:53.218482066 +0000 UTC m=+7.780236415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.203731 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897db4a3c1dacda\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a3c1dacda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.276220122 +0000 UTC m=+4.837974501,LastTimestamp:2026-02-26 17:14:54.019326758 +0000 UTC m=+8.581081117,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.207789 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-apiserver-crc.1897db4b1bf5f600 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": dial tcp 192.168.126.11:6443: connect: connection refused Feb 26 17:15:55 crc kubenswrapper[4805]: body: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.031713792 +0000 UTC m=+8.593468131,LastTimestamp:2026-02-26 17:14:54.031713792 +0000 UTC m=+8.593468131,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.211902 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4b1bf6fd22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.031781154 +0000 UTC m=+8.593535493,LastTimestamp:2026-02-26 17:14:54.031781154 +0000 UTC m=+8.593535493,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 
17:15:55.216426 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b2289cd78 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.14206604 +0000 UTC m=+8.703820389,LastTimestamp:2026-02-26 17:14:54.14206604 +0000 UTC m=+8.703820389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.220235 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b229f77a9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.143485865 +0000 UTC m=+8.705240204,LastTimestamp:2026-02-26 17:14:54.143485865 +0000 UTC m=+8.705240204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.225300 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897db4a50e6bfca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a50e6bfca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.624942026 +0000 UTC m=+5.186696375,LastTimestamp:2026-02-26 17:14:54.74344005 +0000 UTC m=+9.305194419,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.229192 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b469e4dad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.747389357 +0000 UTC m=+9.309143736,LastTimestamp:2026-02-26 17:14:54.747389357 +0000 UTC 
m=+9.309143736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.232760 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1897db4a5374508e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4a5374508e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:50.667774094 +0000 UTC m=+5.229528443,LastTimestamp:2026-02-26 17:14:54.808830445 +0000 UTC m=+9.370584824,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.243346 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b4aa57600 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.814967296 +0000 UTC 
m=+9.376721635,LastTimestamp:2026-02-26 17:14:54.814967296 +0000 UTC m=+9.376721635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.246595 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b4ac77345 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.817194821 +0000 UTC m=+9.378949200,LastTimestamp:2026-02-26 17:14:54.817194821 +0000 UTC m=+9.378949200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.250600 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b54f529e0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container 
etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:54.987962848 +0000 UTC m=+9.549717227,LastTimestamp:2026-02-26 17:14:54.987962848 +0000 UTC m=+9.549717227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.254811 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1897db4b561aba47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:55.007201863 +0000 UTC m=+9.568956222,LastTimestamp:2026-02-26 17:14:55.007201863 +0000 UTC m=+9.568956222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.260689 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db4ada8bc9ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8bc9ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 17:15:55 crc kubenswrapper[4805]: body: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934236653 +0000 UTC m=+7.495991042,LastTimestamp:2026-02-26 17:15:02.934321571 +0000 UTC m=+17.496075950,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.263802 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db4ada8ce799\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8ce799 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934309785 +0000 UTC m=+7.496064174,LastTimestamp:2026-02-26 17:15:02.934390942 
+0000 UTC m=+17.496145321,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.267733 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-apiserver-crc.1897db4d98dedb0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 26 17:15:55 crc kubenswrapper[4805]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 26 17:15:55 crc kubenswrapper[4805]: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:15:04.717286158 +0000 UTC m=+19.279040537,LastTimestamp:2026-02-26 17:15:04.717286158 +0000 UTC m=+19.279040537,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.271901 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1897db4d98dfe847 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:15:04.717355079 +0000 UTC m=+19.279109458,LastTimestamp:2026-02-26 17:15:04.717355079 +0000 UTC m=+19.279109458,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.283912 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db4ada8bc9ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8bc9ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 17:15:55 crc kubenswrapper[4805]: body: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934236653 +0000 UTC m=+7.495991042,LastTimestamp:2026-02-26 17:15:12.935300831 +0000 UTC m=+27.497055210,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.290698 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db4ada8ce799\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8ce799 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934309785 +0000 UTC m=+7.496064174,LastTimestamp:2026-02-26 17:15:12.935420324 +0000 UTC m=+27.497174703,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.295976 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db4f82e7c03b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:15:12.938704955 +0000 UTC m=+27.500459324,LastTimestamp:2026-02-26 17:15:12.938704955 +0000 UTC m=+27.500459324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.300921 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db49b9d1d602\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49b9d1d602 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.090211842 +0000 UTC m=+2.651966181,LastTimestamp:2026-02-26 17:15:13.056160495 +0000 UTC m=+27.617914834,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.305752 4805 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db49cb0a9acf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49cb0a9acf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.379144911 +0000 UTC m=+2.940899250,LastTimestamp:2026-02-26 17:15:13.218558016 +0000 UTC m=+27.780312355,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.311962 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db49cba90fd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db49cba90fd3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:48.389529555 +0000 UTC 
m=+2.951283904,LastTimestamp:2026-02-26 17:15:13.228123182 +0000 UTC m=+27.789877521,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.317162 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-controller-manager-crc.1897db51d6c21538 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 26 17:15:55 crc kubenswrapper[4805]: body: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:15:22.93545708 +0000 UTC m=+37.497211449,LastTimestamp:2026-02-26 17:15:22.93545708 +0000 UTC m=+37.497211449,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.322007 4805 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1897db51d6c3e447 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:15:22.935575623 +0000 UTC m=+37.497330012,LastTimestamp:2026-02-26 17:15:22.935575623 +0000 UTC m=+37.497330012,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:15:55 crc kubenswrapper[4805]: E0226 17:15:55.327734 4805 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1897db4ada8bc9ed\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 26 17:15:55 crc kubenswrapper[4805]: &Event{ObjectMeta:{kube-controller-manager-crc.1897db4ada8bc9ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 26 17:15:55 crc kubenswrapper[4805]: body: Feb 26 17:15:55 crc kubenswrapper[4805]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:14:52.934236653 +0000 UTC m=+7.495991042,LastTimestamp:2026-02-26 17:15:32.934962223 +0000 UTC 
m=+47.496716602,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 26 17:15:55 crc kubenswrapper[4805]: > Feb 26 17:15:55 crc kubenswrapper[4805]: I0226 17:15:55.905295 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:56 crc kubenswrapper[4805]: I0226 17:15:56.903947 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:57 crc kubenswrapper[4805]: E0226 17:15:57.063293 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 17:15:57 crc kubenswrapper[4805]: I0226 17:15:57.906197 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:58 crc kubenswrapper[4805]: I0226 17:15:58.905823 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.908077 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.937510 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.937634 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.938538 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.938597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.938610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:15:59 crc kubenswrapper[4805]: I0226 17:15:59.941880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.282212 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.283189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.283226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.283239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:00 crc kubenswrapper[4805]: E0226 17:16:00.747824 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.755997 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.757398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.757441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.757454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.757482 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:16:00 crc kubenswrapper[4805]: E0226 17:16:00.763879 4805 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.907352 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.952743 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.954414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.954496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:00 crc kubenswrapper[4805]: I0226 17:16:00.954522 
4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:01 crc kubenswrapper[4805]: I0226 17:16:01.904852 4805 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 26 17:16:01 crc kubenswrapper[4805]: I0226 17:16:01.985875 4805 csr.go:261] certificate signing request csr-qndxb is approved, waiting to be issued Feb 26 17:16:01 crc kubenswrapper[4805]: I0226 17:16:01.995144 4805 csr.go:257] certificate signing request csr-qndxb is issued Feb 26 17:16:02 crc kubenswrapper[4805]: I0226 17:16:02.048165 4805 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 26 17:16:02 crc kubenswrapper[4805]: I0226 17:16:02.738803 4805 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 26 17:16:02 crc kubenswrapper[4805]: I0226 17:16:02.996445 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-27 19:56:22.189906358 +0000 UTC Feb 26 17:16:02 crc kubenswrapper[4805]: I0226 17:16:02.996509 4805 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6578h40m19.193403033s for next certificate rotation Feb 26 17:16:04 crc kubenswrapper[4805]: I0226 17:16:04.952790 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:16:04 crc kubenswrapper[4805]: I0226 17:16:04.955118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:04 crc kubenswrapper[4805]: I0226 17:16:04.955595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:04 crc 
kubenswrapper[4805]: I0226 17:16:04.955689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:04 crc kubenswrapper[4805]: I0226 17:16:04.956705 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:16:04 crc kubenswrapper[4805]: E0226 17:16:04.957007 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.063909 4805 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.764506 4805 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.766243 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.766296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.766313 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.766454 4805 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.783651 4805 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 
17:16:07.784083 4805 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.784122 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.790529 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.790575 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.790590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.790613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.790626 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:07Z","lastTransitionTime":"2026-02-26T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.805385 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.814494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.814540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.814555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.814576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.814590 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:07Z","lastTransitionTime":"2026-02-26T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.828823 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{…}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.835908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.835960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.835979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.836004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.836055 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:07Z","lastTransitionTime":"2026-02-26T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.853291 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{…}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.863684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.863744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.863764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.863788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:07 crc kubenswrapper[4805]: I0226 17:16:07.863808 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:07Z","lastTransitionTime":"2026-02-26T17:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.877113 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.877325 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.877364 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:07 crc kubenswrapper[4805]: E0226 17:16:07.978086 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.078320 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.178596 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.279128 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.380511 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.481669 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.582437 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.683126 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.783342 4805 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.884146 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:08 crc kubenswrapper[4805]: E0226 17:16:08.984862 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.085919 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.186131 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.287243 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.388130 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.489130 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.496842 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.589945 4805 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.645192 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.692876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:09 crc 
kubenswrapper[4805]: I0226 17:16:09.692947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.692968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.692991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.693008 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:09Z","lastTransitionTime":"2026-02-26T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.797608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.797690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.797717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.797752 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.797779 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:09Z","lastTransitionTime":"2026-02-26T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.901132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.901187 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.901226 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.901257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.901280 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:09Z","lastTransitionTime":"2026-02-26T17:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.932753 4805 apiserver.go:52] "Watching apiserver" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.950672 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.951131 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.951666 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.951769 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.951893 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.951951 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.952077 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.952323 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.952333 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.952428 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:09 crc kubenswrapper[4805]: E0226 17:16:09.953154 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.954728 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.954730 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.956340 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.956726 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.957224 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.957530 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.957932 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.958258 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.958595 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 17:16:09 crc kubenswrapper[4805]: I0226 17:16:09.987380 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.002398 4805 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.003731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.003789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.003808 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.003865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.003883 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.007142 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.028120 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.044788 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.061905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.076193 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.089230 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.097470 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.097534 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.097566 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.097600 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.097632 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098179 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098206 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098432 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098502 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098545 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.098617 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.099196 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.099765 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.100046 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.100480 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.100550 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.100642 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101310 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101352 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101353 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101381 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101414 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101450 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101516 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101544 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101574 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101717 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101803 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 
17:16:10.101834 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101863 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101971 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101994 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.101984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" 
(OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102032 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102105 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102135 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102170 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 
17:16:10.102201 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102305 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102327 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102350 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102371 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102393 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102414 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102439 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102461 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102483 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102509 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102564 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102567 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102597 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102633 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102703 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102765 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102787 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102807 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102829 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102891 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102921 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102951 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.102978 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103006 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103217 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103398 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103715 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103838 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.103932 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.104473 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.104704 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.104773 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.104806 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.105810 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.105991 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.106140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.106215 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.106612 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.106797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.107316 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.107348 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.107538 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.107697 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.107767 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.108075 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.108230 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.108279 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.108343 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.108319 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.108758 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.109325 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.109724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.109970 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.110270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.110386 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.110483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.110941 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.111519 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.111526 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.111616 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112566 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112184 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112249 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112840 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112864 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.112229 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.113582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.113870 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.113961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.114009 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.114073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.114108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.114149 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.114176 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.114660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.115339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.115951 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.116285 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.116447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.116797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.116346 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.117270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.117307 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.117376 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.117373 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.117581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.117620 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118077 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118172 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118203 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118227 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118248 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118269 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118297 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118377 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118400 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118461 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118483 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118501 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118522 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118545 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118584 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118643 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118664 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118686 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118705 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118725 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118744 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.118761 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.119181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.119540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.119838 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120111 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120159 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120182 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120205 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120227 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120309 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120329 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120350 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120487 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120727 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120754 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120790 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120818 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120837 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120842 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120880 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120899 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.120938 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121043 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121248 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121273 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121272 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121425 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.122705 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.122757 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.122838 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.121437 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.122981 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123059 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123125 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123160 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123322 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123368 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123467 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123559 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123592 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 17:16:10 
crc kubenswrapper[4805]: I0226 17:16:10.123624 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123657 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123682 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123704 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123726 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123747 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123791 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123822 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123843 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 
17:16:10.123948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123971 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123991 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124013 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124054 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124080 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" 
(UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124213 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124317 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124349 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124374 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124397 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124418 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124439 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124479 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124521 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124553 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124584 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124712 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124744 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124775 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124806 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124835 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 
17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124904 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124926 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124997 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125041 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125063 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125118 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 
17:16:10.125244 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125271 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125296 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125325 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125357 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125388 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125418 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125712 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125738 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 
17:16:10.125760 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125834 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125874 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125901 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125979 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125994 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126009 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126043 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126058 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126072 4805 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126085 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126098 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126111 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126124 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126137 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126152 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126165 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126178 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126191 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126203 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126215 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126228 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126240 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126253 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" 
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126266 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126278 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126291 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126308 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126320 4805 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126332 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126344 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126356 4805 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126370 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126381 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126393 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126405 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126418 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126430 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126442 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath 
\"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126454 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126466 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126480 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126492 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126504 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126517 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126529 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126542 4805 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126554 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126566 4805 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126578 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126593 4805 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126605 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126618 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126631 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126643 4805 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126657 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126669 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126681 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126694 4805 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126714 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126727 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126740 4805 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126752 4805 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126765 4805 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126779 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126791 4805 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126803 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126815 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" 
DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126827 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126839 4805 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126851 4805 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126868 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126886 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126904 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126923 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 
17:16:10.126940 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126957 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126975 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126991 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127009 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127047 4805 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127066 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127086 4805 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127105 4805 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133696 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123170 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.134882 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123226 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.123698 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124050 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124074 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124151 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124650 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.124767 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125186 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125630 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125827 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125893 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.125906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.126883 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127076 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127452 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.127844 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.128039 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.128146 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.128548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.128947 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.129058 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.129078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.129243 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.129873 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.129877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.129984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.131749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.131806 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.131855 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.131825 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133322 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133430 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133589 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.133614 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:16:10.633588611 +0000 UTC m=+85.195342960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133949 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.133996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.134865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.134949 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.135305 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.138430 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.138451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.138669 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.135653 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.136093 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.136389 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.136427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.136937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.137107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.138911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.137357 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.137481 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.137531 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.137969 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.138170 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.138970 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.139070 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.139381 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140534 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140603 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.140667 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.140724 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:10.640707276 +0000 UTC m=+85.202461625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140760 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140772 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.140906 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.140947 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:10.640936992 +0000 UTC m=+85.202691341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.140612 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.141328 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.141709 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.142233 4805 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.142385 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.142835 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.142929 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.142984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.143403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.144162 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.144341 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.144592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.145097 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.145233 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.144239 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.145632 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.145769 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.145953 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.146039 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.146074 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.146141 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.146318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.148881 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.148916 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.148935 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.149037 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:10.64899467 +0000 UTC m=+85.210749129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.150186 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.151210 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.153785 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.154257 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.154280 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.154297 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.154353 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:10.654335712 +0000 UTC m=+85.216090061 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.154985 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.155170 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.155345 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.159684 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.161036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.162122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.163029 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.163250 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.163553 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.163849 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.164544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.164944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.165533 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.165905 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.166530 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.166649 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.167249 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.168265 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.168780 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.169061 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.169194 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.169235 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.172328 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.172371 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.182853 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.186097 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.194636 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.217184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.217225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.217250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.217267 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.217277 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227688 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227736 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227752 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227753 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227767 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227818 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227831 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227841 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227865 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227877 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227885 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227895 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227905 4805 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227915 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227950 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227961 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227969 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227979 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227990 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.227999 4805 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228007 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228041 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228050 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228057 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228067 4805 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228077 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228086 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228095 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228118 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228127 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228136 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228145 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228155 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228164 4805 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228173 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228196 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228205 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228213 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228221 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" 
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228229 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228237 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228246 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228254 4805 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228276 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228284 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228292 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228302 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228310 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228320 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228330 4805 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228352 4805 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228361 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228371 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228380 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228389 4805 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228397 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228406 4805 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228429 4805 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228437 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228446 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228454 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228464 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228472 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228482 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228504 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228514 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228523 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228559 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228582 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228591 4805 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228600 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228611 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228621 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228630 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228639 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228663 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228672 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228681 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228690 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228700 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228708 4805 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228718 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228740 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228748 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228757 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228765 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228773 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228783 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228792 4805 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228816 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228825 4805 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228833 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228841 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228849 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228858 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228868 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228892 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228904 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228912 4805 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228920 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228929 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228937 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228945 4805 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228967 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228976 4805 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228984 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.228993 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229002 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229011 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229036 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229044 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229052 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229060 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229068 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229076 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.229085 4805 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.276103 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.292315 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 26 17:16:10 crc kubenswrapper[4805]: W0226 17:16:10.298949 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-cd7b7958a32141cf40f724a7510c61d33457c643c5fa780e4aa49c72e87f848b WatchSource:0}: Error finding container cd7b7958a32141cf40f724a7510c61d33457c643c5fa780e4aa49c72e87f848b: Status 404 returned error can't find the container with id cd7b7958a32141cf40f724a7510c61d33457c643c5fa780e4aa49c72e87f848b
Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.302562 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 26 17:16:10 crc kubenswrapper[4805]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash
Feb 26 17:16:10 crc kubenswrapper[4805]: set -o allexport
Feb 26 17:16:10 crc kubenswrapper[4805]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Feb 26 17:16:10 crc kubenswrapper[4805]: source /etc/kubernetes/apiserver-url.env
Feb 26 17:16:10 crc kubenswrapper[4805]: else
Feb 26 17:16:10 crc kubenswrapper[4805]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Feb 26 17:16:10 crc kubenswrapper[4805]: exit 1
Feb 26 17:16:10 crc kubenswrapper[4805]: fi
Feb 26 17:16:10 crc kubenswrapper[4805]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Feb 26 17:16:10 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 26 17:16:10 crc kubenswrapper[4805]: > logger="UnhandledError"
Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.303710 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312"
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.305210 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 26 17:16:10 crc kubenswrapper[4805]: W0226 17:16:10.308854 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-fe220024f45211040887a8dc157b1f5c9865ad1898d93b65a1db8e8ed3e21846 WatchSource:0}: Error finding container fe220024f45211040887a8dc157b1f5c9865ad1898d93b65a1db8e8ed3e21846: Status 404 returned error can't find the container with id fe220024f45211040887a8dc157b1f5c9865ad1898d93b65a1db8e8ed3e21846
Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.309092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cd7b7958a32141cf40f724a7510c61d33457c643c5fa780e4aa49c72e87f848b"}
Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.314505 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.314865 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 26 17:16:10 crc kubenswrapper[4805]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash
Feb 26 17:16:10 crc kubenswrapper[4805]: set -o allexport
Feb 26 17:16:10 crc kubenswrapper[4805]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Feb 26 17:16:10 crc kubenswrapper[4805]: source /etc/kubernetes/apiserver-url.env
Feb 26 17:16:10 crc kubenswrapper[4805]: else
Feb 26 17:16:10 crc kubenswrapper[4805]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Feb 26 17:16:10 crc kubenswrapper[4805]: exit 1
Feb 26 17:16:10 crc kubenswrapper[4805]: fi
Feb 26 17:16:10 crc kubenswrapper[4805]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Feb 26 17:16:10 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f
5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 17:16:10 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.316180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, 
cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.316211 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.320257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.320304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.320316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.320334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.320346 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: W0226 17:16:10.323959 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4d2d84c1c00182f9dae17f5d612ce175c3b4b8b017662016b925471f22bcbfd1 WatchSource:0}: Error finding container 4d2d84c1c00182f9dae17f5d612ce175c3b4b8b017662016b925471f22bcbfd1: Status 404 returned error can't find the container with id 4d2d84c1c00182f9dae17f5d612ce175c3b4b8b017662016b925471f22bcbfd1 Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.327100 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:16:10 crc kubenswrapper[4805]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 17:16:10 crc kubenswrapper[4805]: if [[ -f "/env/_master" ]]; then Feb 26 17:16:10 crc kubenswrapper[4805]: set -o allexport Feb 26 17:16:10 crc kubenswrapper[4805]: source "/env/_master" Feb 26 17:16:10 crc kubenswrapper[4805]: set +o allexport Feb 26 17:16:10 crc kubenswrapper[4805]: fi Feb 26 17:16:10 crc kubenswrapper[4805]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 26 17:16:10 crc kubenswrapper[4805]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 17:16:10 crc kubenswrapper[4805]: ho_enable="--enable-hybrid-overlay" Feb 26 17:16:10 crc kubenswrapper[4805]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 17:16:10 crc kubenswrapper[4805]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 17:16:10 crc kubenswrapper[4805]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 17:16:10 crc kubenswrapper[4805]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 17:16:10 crc kubenswrapper[4805]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 17:16:10 crc kubenswrapper[4805]: --webhook-host=127.0.0.1 \ Feb 26 17:16:10 crc kubenswrapper[4805]: --webhook-port=9743 \ Feb 26 17:16:10 crc kubenswrapper[4805]: ${ho_enable} \ Feb 26 17:16:10 crc kubenswrapper[4805]: --enable-interconnect \ Feb 26 17:16:10 crc kubenswrapper[4805]: --disable-approver \ Feb 26 17:16:10 crc kubenswrapper[4805]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 17:16:10 crc kubenswrapper[4805]: --wait-for-kubernetes-api=200s \ Feb 26 17:16:10 crc kubenswrapper[4805]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 17:16:10 crc kubenswrapper[4805]: --loglevel="${LOGLEVEL}" Feb 26 17:16:10 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 17:16:10 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.328137 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.329356 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:16:10 crc kubenswrapper[4805]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 17:16:10 crc kubenswrapper[4805]: if [[ -f "/env/_master" ]]; then Feb 26 17:16:10 crc kubenswrapper[4805]: set -o allexport Feb 26 17:16:10 crc kubenswrapper[4805]: source "/env/_master" Feb 26 17:16:10 crc kubenswrapper[4805]: set +o allexport Feb 26 17:16:10 crc kubenswrapper[4805]: fi Feb 26 17:16:10 crc kubenswrapper[4805]: Feb 26 17:16:10 crc kubenswrapper[4805]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 17:16:10 crc kubenswrapper[4805]: exec 
/usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 17:16:10 crc kubenswrapper[4805]: --disable-webhook \ Feb 26 17:16:10 crc kubenswrapper[4805]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 17:16:10 crc kubenswrapper[4805]: --loglevel="${LOGLEVEL}" Feb 26 17:16:10 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars Feb 26 17:16:10 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.330509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.340592 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.352607 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.362215 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.372109 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.382074 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.423496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.423557 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.423581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.423610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.423635 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.527427 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.527516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.527550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.527580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.527604 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.631000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.631124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.631215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.631304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.631323 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.734149 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.734286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734406 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:16:11.734370425 +0000 UTC m=+86.296124804 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734482 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734519 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734547 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.734544 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734632 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-02-26 17:16:11.73460349 +0000 UTC m=+86.296357899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734784 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734809 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734831 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.734897 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:11.734881347 +0000 UTC m=+86.296635726 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.734949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.735308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.735340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.735386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.735410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.735436 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.735487 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:11.735471972 +0000 UTC m=+86.297226351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.735439 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.735544 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.735440 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: E0226 17:16:10.735962 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-26 17:16:11.735889492 +0000 UTC m=+86.297643871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.838115 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.838192 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.838216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.838248 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.838270 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.940770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.940827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.940845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.940871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.940893 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:10Z","lastTransitionTime":"2026-02-26T17:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.958638 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.959642 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.962252 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.963596 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.965871 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.966910 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.968174 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.970111 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.971349 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.973258 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.974335 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.976473 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.977571 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.978789 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.980643 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.981684 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.983617 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.984598 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.986071 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.987920 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.989866 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.992951 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.994187 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.996336 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.997208 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 26 17:16:10 crc kubenswrapper[4805]: I0226 17:16:10.998443 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.000536 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.001495 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.003398 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.004385 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.005953 4805 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.006130 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.007766 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.008364 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.008765 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.009981 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.010623 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.011161 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.011735 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.012402 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.012876 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.013485 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.014071 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.014641 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.017826 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.018343 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.019243 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.020117 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.020991 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.021452 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.022567 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.023090 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.023636 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.025067 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.044007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.044198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc 
kubenswrapper[4805]: I0226 17:16:11.044289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.044374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.044401 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.147056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.147106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.147124 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.147148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.147165 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.250072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.250121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.250136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.250157 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.250171 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.314617 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4d2d84c1c00182f9dae17f5d612ce175c3b4b8b017662016b925471f22bcbfd1"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.316148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fe220024f45211040887a8dc157b1f5c9865ad1898d93b65a1db8e8ed3e21846"} Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.317179 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:16:11 crc kubenswrapper[4805]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 17:16:11 crc kubenswrapper[4805]: if [[ -f "/env/_master" ]]; then Feb 26 17:16:11 crc kubenswrapper[4805]: set -o allexport Feb 26 17:16:11 crc kubenswrapper[4805]: source "/env/_master" Feb 26 17:16:11 crc kubenswrapper[4805]: set +o allexport Feb 26 17:16:11 crc kubenswrapper[4805]: fi Feb 26 17:16:11 crc kubenswrapper[4805]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 26 17:16:11 crc kubenswrapper[4805]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 26 17:16:11 crc kubenswrapper[4805]: ho_enable="--enable-hybrid-overlay" Feb 26 17:16:11 crc kubenswrapper[4805]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 26 17:16:11 crc kubenswrapper[4805]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 26 17:16:11 crc kubenswrapper[4805]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 26 17:16:11 crc kubenswrapper[4805]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 17:16:11 crc kubenswrapper[4805]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 26 17:16:11 crc kubenswrapper[4805]: --webhook-host=127.0.0.1 \ Feb 26 17:16:11 crc kubenswrapper[4805]: --webhook-port=9743 \ Feb 26 17:16:11 crc kubenswrapper[4805]: ${ho_enable} \ Feb 26 17:16:11 crc kubenswrapper[4805]: --enable-interconnect \ Feb 26 17:16:11 crc kubenswrapper[4805]: --disable-approver \ Feb 26 17:16:11 crc kubenswrapper[4805]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 26 17:16:11 crc kubenswrapper[4805]: --wait-for-kubernetes-api=200s \ Feb 26 17:16:11 crc kubenswrapper[4805]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 26 17:16:11 crc kubenswrapper[4805]: --loglevel="${LOGLEVEL}" Feb 26 17:16:11 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 17:16:11 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.319503 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.320552 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:16:11 crc kubenswrapper[4805]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 26 17:16:11 crc kubenswrapper[4805]: if [[ -f "/env/_master" ]]; then Feb 26 17:16:11 crc kubenswrapper[4805]: set -o allexport Feb 26 17:16:11 crc kubenswrapper[4805]: source "/env/_master" Feb 26 17:16:11 crc kubenswrapper[4805]: set +o allexport Feb 26 17:16:11 crc kubenswrapper[4805]: fi Feb 26 17:16:11 crc kubenswrapper[4805]: Feb 26 17:16:11 crc kubenswrapper[4805]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 26 17:16:11 crc kubenswrapper[4805]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 26 17:16:11 crc kubenswrapper[4805]: --disable-webhook \ Feb 26 17:16:11 crc kubenswrapper[4805]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 26 17:16:11 crc kubenswrapper[4805]: --loglevel="${LOGLEVEL}" Feb 26 17:16:11 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 17:16:11 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.320984 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.322285 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.333369 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.350749 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.352691 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.352756 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.352782 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.352812 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.352836 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.367102 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.382971 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.398538 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.415526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.427773 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.438595 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.454560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.454588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.454596 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.454610 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.454619 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.455697 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.473961 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.492920 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.503563 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.557548 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.557584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.557593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.557606 4805 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.557616 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.660256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.660296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.660309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.660324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.660336 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.746696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.746849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.746990 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:16:13.746965853 +0000 UTC m=+88.308720202 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747013 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747141 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:13.747115346 +0000 UTC m=+88.308869735 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.747216 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.747268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747426 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747463 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747467 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747515 4805 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747484 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747541 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747599 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.747307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747699 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:13.747594358 +0000 UTC m=+88.309348737 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747742 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:13.747722961 +0000 UTC m=+88.309477330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.747772 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:13.747756392 +0000 UTC m=+88.309510771 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.762965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.763053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.763072 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.763098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.763124 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.865476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.865502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.865510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.865524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.865533 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.952772 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.952875 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.953005 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.952780 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.953190 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:11 crc kubenswrapper[4805]: E0226 17:16:11.953300 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.967632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.967672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.967682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.967698 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:11 crc kubenswrapper[4805]: I0226 17:16:11.967708 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:11Z","lastTransitionTime":"2026-02-26T17:16:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.070411 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.070449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.070456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.070469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.070480 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.173765 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.173807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.173816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.173831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.173841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.281477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.281542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.281554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.281571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.281609 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.384457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.384492 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.384502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.384517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.384529 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.486847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.486901 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.486910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.486924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.486933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.589329 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.589579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.589672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.589753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.589836 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.691641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.691694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.691704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.691718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.691727 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.794836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.794874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.794884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.794899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.794908 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.897842 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.897881 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.897890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.897905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:12 crc kubenswrapper[4805]: I0226 17:16:12.897916 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:12Z","lastTransitionTime":"2026-02-26T17:16:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.000651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.000718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.000741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.000766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.000783 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.103589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.103664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.103683 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.103709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.103729 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.206608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.206652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.206666 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.206682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.206693 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.309594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.309649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.309670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.309694 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.309712 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.412908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.412991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.413047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.413085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.413107 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.515935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.515988 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.516005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.516066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.516092 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.619464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.619526 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.619551 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.619581 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.619608 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.722838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.722886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.722898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.722920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.722933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.765518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.765618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.765641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.765662 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765723 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:16:17.765662999 +0000 UTC m=+92.327417348 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765754 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765767 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765785 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765796 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765814 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-02-26 17:16:17.765801862 +0000 UTC m=+92.327556211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765840 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:17.765827063 +0000 UTC m=+92.327581402 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.765811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765937 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 
17:16:13.765937 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.766047 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.766065 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.765986 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:17.765976136 +0000 UTC m=+92.327730485 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.766155 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:17.76612386 +0000 UTC m=+92.327878199 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.825170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.825225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.825246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.825268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.825280 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.927836 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.927863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.927871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.927883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.927891 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:13Z","lastTransitionTime":"2026-02-26T17:16:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.952899 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.952983 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:13 crc kubenswrapper[4805]: I0226 17:16:13.952919 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.953139 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.953302 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:13 crc kubenswrapper[4805]: E0226 17:16:13.953454 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.030162 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.030237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.030262 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.030292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.030315 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.133358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.133450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.133471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.133497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.133515 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.236562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.237287 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.237429 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.237680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.237846 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.340992 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.341099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.341126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.341154 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.341175 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.444087 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.444182 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.444201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.444228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.444244 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.546412 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.546470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.546483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.546505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.546520 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.650422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.650474 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.650494 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.650527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.650550 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.753704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.753763 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.753778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.753800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.753816 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.857232 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.857266 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.857278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.857296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.857309 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.959580 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.959646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.959669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.959697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:14 crc kubenswrapper[4805]: I0226 17:16:14.959719 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:14Z","lastTransitionTime":"2026-02-26T17:16:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.062777 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.063200 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.063541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.063760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.063967 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.167395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.167468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.167483 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.167528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.167541 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.269723 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.269761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.269770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.269785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.269796 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.373403 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.373468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.373493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.373519 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.373543 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.476282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.476374 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.476396 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.476418 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.476434 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.578597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.578642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.578653 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.578669 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.578680 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.681053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.681106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.681127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.681151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.681175 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.783861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.783893 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.783903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.783915 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.783924 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.886449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.886562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.886588 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.886618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.886638 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.952365 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.952404 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.952462 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:15 crc kubenswrapper[4805]: E0226 17:16:15.952774 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:15 crc kubenswrapper[4805]: E0226 17:16:15.952989 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:15 crc kubenswrapper[4805]: E0226 17:16:15.953152 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.966550 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.967209 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:16:15 crc kubenswrapper[4805]: E0226 17:16:15.967412 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.989208 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.989275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.989289 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.989309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:15 crc kubenswrapper[4805]: I0226 17:16:15.989322 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:15Z","lastTransitionTime":"2026-02-26T17:16:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.091950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.092069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.092088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.092112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.092130 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.194716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.194791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.194809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.194834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.194852 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.297516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.297579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.297594 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.297611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.297622 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.335665 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:16:16 crc kubenswrapper[4805]: E0226 17:16:16.335888 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.400116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.400247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.400261 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.400276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.400286 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.502389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.502449 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.502460 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.502476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.502485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.605646 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.605704 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.605718 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.605736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.605748 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.708505 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.708570 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.708583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.708603 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.708619 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.810680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.810721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.810733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.810749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.810761 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.913820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.913876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.913890 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.913913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.913928 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:16Z","lastTransitionTime":"2026-02-26T17:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.967340 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.981508 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:16 crc kubenswrapper[4805]: I0226 17:16:16.991975 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.000433 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.009576 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.016587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.016644 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.016662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.016687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.016705 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.018915 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.030184 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.118384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.118456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.118469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.118487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.118500 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.220982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.221047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.221062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.221080 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.221094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.324132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.324214 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.324241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.324291 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.324320 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.427191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.427265 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.427276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.427290 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.427302 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.529810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.529847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.529858 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.529872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.529882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.631905 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.631938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.631947 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.631960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.631970 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.734428 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.734523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.734540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.734563 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.734579 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.801082 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.801159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.801184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.801206 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.801224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801285 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801329 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:25.801316905 +0000 UTC m=+100.363071234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801634 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:16:25.801626433 +0000 UTC m=+100.363380772 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801689 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801714 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:25.801706405 +0000 UTC m=+100.363460744 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801791 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801808 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801818 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801848 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:25.801839638 +0000 UTC m=+100.363593977 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801909 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801955 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.801973 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.802094 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:25.802064624 +0000 UTC m=+100.363818983 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.837093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.837136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.837148 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.837173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.837185 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.940000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.940046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.940055 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.940069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.940078 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:17Z","lastTransitionTime":"2026-02-26T17:16:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.952503 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.952612 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:17 crc kubenswrapper[4805]: I0226 17:16:17.952509 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.952828 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.952964 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:17 crc kubenswrapper[4805]: E0226 17:16:17.952643 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.018668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.018710 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.018721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.018770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.018826 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: E0226 17:16:18.029819 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.033201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.033224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.033234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.033246 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.033256 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: E0226 17:16:18.042328 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.046137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.046198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.046215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.046236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.046251 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: E0226 17:16:18.057126 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.060499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.060531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.060542 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.060559 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.060570 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: E0226 17:16:18.071429 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.074898 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.074931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.074940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.074954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.074963 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: E0226 17:16:18.085446 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:18 crc kubenswrapper[4805]: E0226 17:16:18.085572 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.086908 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.086950 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.086964 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.086993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.087003 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.190059 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.190151 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.190169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.190193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.190210 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.293150 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.293608 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.293713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.293825 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.293928 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.397480 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.397567 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.397587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.397614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.397632 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.503365 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.503426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.503446 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.503465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.503476 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.605773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.605823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.605833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.605847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.605856 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.708943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.709011 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.709074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.709103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.709126 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.811168 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.811224 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.811241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.811263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.811281 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.913314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.913363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.913378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.913398 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.913413 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:18Z","lastTransitionTime":"2026-02-26T17:16:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:18 crc kubenswrapper[4805]: I0226 17:16:18.969035 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.015672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.015715 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.015726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.015742 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.015755 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.119073 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.119110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.119121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.119137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.119148 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.221639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.221673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.221682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.221695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.221704 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.323938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.323973 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.323985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.324001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.324035 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.427064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.427110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.427123 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.427139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.427150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.529899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.529953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.529965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.529982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.529993 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.632591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.632630 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.632641 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.632657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.632668 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.734823 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.734861 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.734870 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.734884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.734893 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.837809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.837849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.837857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.837875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.837885 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.940785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.940838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.940850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.940865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.940874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:19Z","lastTransitionTime":"2026-02-26T17:16:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.952036 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.952059 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:19 crc kubenswrapper[4805]: E0226 17:16:19.952189 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:19 crc kubenswrapper[4805]: I0226 17:16:19.952053 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:19 crc kubenswrapper[4805]: E0226 17:16:19.952593 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:19 crc kubenswrapper[4805]: E0226 17:16:19.952521 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.042970 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.043066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.043092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.043119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.043136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.145689 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.145732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.145745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.145762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.145773 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.249091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.249166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.249185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.249212 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.249230 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.352081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.352158 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.352177 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.352213 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.352231 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.455322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.455389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.455406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.455433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.455454 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.558370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.558432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.558445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.558466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.558477 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.660839 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.660884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.660896 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.660909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.660919 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.763753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.763819 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.763838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.763862 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.763879 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.866745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.866780 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.866791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.866809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.866819 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.969058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.969140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.969160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.969190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:20 crc kubenswrapper[4805]: I0226 17:16:20.969209 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:20Z","lastTransitionTime":"2026-02-26T17:16:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.071217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.071280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.071300 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.071325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.071343 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.174457 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.174516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.174533 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.174555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.174576 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.276490 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.276532 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.276541 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.276558 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.276566 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.381481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.381522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.381536 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.381554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.381565 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.484204 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.484263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.484278 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.484296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.484309 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.586743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.586785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.586797 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.586814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.586828 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.689810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.689880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.689894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.689911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.689923 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.791875 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.791917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.791928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.791944 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.791956 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.894611 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.894832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.894922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.894991 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.895090 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.952599 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.952738 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:21 crc kubenswrapper[4805]: E0226 17:16:21.952883 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.953041 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:21 crc kubenswrapper[4805]: E0226 17:16:21.953294 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:21 crc kubenswrapper[4805]: E0226 17:16:21.953402 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:21 crc kubenswrapper[4805]: E0226 17:16:21.954771 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:16:21 crc kubenswrapper[4805]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 26 17:16:21 crc kubenswrapper[4805]: set -o allexport Feb 26 17:16:21 crc kubenswrapper[4805]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 26 17:16:21 crc kubenswrapper[4805]: source /etc/kubernetes/apiserver-url.env Feb 26 17:16:21 crc kubenswrapper[4805]: else Feb 26 17:16:21 crc kubenswrapper[4805]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 26 17:16:21 crc kubenswrapper[4805]: exit 1 Feb 26 17:16:21 crc kubenswrapper[4805]: fi Feb 26 17:16:21 crc kubenswrapper[4805]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 26 17:16:21 crc kubenswrapper[4805]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 26 17:16:21 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:16:21 crc kubenswrapper[4805]: E0226 17:16:21.956116 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.997920 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 
17:16:21.997976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.997993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.998042 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:21 crc kubenswrapper[4805]: I0226 17:16:21.998059 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:21Z","lastTransitionTime":"2026-02-26T17:16:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.099873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.099959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.099981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.100006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.100056 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.202635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.202688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.202701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.202720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.202733 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.305413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.305458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.305466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.305534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.305547 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.413259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.413309 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.413324 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.413347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.413364 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.516216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.516268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.516279 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.516296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.516307 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.619076 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.619132 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.619143 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.619160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.619172 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.721198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.721241 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.721249 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.721263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.721271 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.823320 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.823368 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.823381 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.823425 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.823449 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.925751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.925843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.925865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.925895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:22 crc kubenswrapper[4805]: I0226 17:16:22.925919 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:22Z","lastTransitionTime":"2026-02-26T17:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.027928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.027971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.027979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.027994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.028003 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.130407 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.130452 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.130471 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.130487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.130496 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.233443 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.233491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.233500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.233516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.233529 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.336711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.336768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.336783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.336806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.336820 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.439079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.439130 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.439144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.439163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.439178 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.541052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.541088 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.541097 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.541111 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.541120 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.644304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.644582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.644686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.644768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.644848 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.747092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.747127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.747138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.747153 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.747166 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.848981 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.849039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.849050 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.849066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.849077 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.951894 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.951962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.951985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.952051 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.952065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.952089 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.952104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:23Z","lastTransitionTime":"2026-02-26T17:16:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:23 crc kubenswrapper[4805]: I0226 17:16:23.952145 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:23 crc kubenswrapper[4805]: E0226 17:16:23.952273 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:23 crc kubenswrapper[4805]: E0226 17:16:23.952413 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:23 crc kubenswrapper[4805]: E0226 17:16:23.952537 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.055753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.055829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.055857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.055886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.055908 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.158793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.158857 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.158878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.158909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.158933 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.257569 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.261801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.261906 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.261932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.261961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.261980 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.364674 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.364744 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.364767 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.364792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.364813 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.468067 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.468126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.468144 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.468169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.468186 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.571135 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.571173 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.571185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.571199 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.571211 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.673445 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.673478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.673487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.673499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.673508 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.775685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.775770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.775816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.775838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.775850 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.878516 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.878571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.878585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.878605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.878621 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.981454 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.981506 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.981522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.981545 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:24 crc kubenswrapper[4805]: I0226 17:16:24.981562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:24Z","lastTransitionTime":"2026-02-26T17:16:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.084093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.084122 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.084131 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.084145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.084155 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.187062 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.187106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.187121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.187141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.187155 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.289813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.290191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.290277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.290391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.290474 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.359408 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.359485 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.369967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.382808 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.392761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.392820 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.392828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.392845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.392854 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.398083 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.415504 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.450726 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.466681 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.478264 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.495565 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.497556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.497618 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.497633 4805 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.497651 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.497668 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.601384 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.601451 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.601470 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.601497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.601515 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.704883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.704939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.704956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.704977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.704996 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.808523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.808599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.808623 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.808656 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.808681 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.872566 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.872726 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.872817 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:16:41.872788935 +0000 UTC m=+116.434543274 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.872916 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873160 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:41.873121693 +0000 UTC m=+116.434876082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.873218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.873289 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.873367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873439 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873473 4805 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873480 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873486 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873558 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:41.873539723 +0000 UTC m=+116.435294102 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873581 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873637 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873659 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873610 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:41.873596555 +0000 UTC m=+116.435350894 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.873744 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:16:41.873719598 +0000 UTC m=+116.435473987 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.911229 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.911297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.911346 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.911370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.911385 4805 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:25Z","lastTransitionTime":"2026-02-26T17:16:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.952804 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.952826 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:25 crc kubenswrapper[4805]: I0226 17:16:25.952852 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.953076 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.953179 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:25 crc kubenswrapper[4805]: E0226 17:16:25.953258 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.014052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.014098 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.014106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.014120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.014129 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.118107 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.118184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.118207 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.118239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.118263 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.221228 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.221297 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.221316 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.221340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.221357 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.323801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.323867 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.323884 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.323911 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.323936 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.427121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.427230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.427250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.427275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.427294 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.529753 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.530186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.530357 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.530534 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.530704 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.633268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.633818 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.633895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.633989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.634098 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.736687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.736768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.736794 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.736821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.736841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.840391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.840814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.840874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.840929 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.840966 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.943066 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.943108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.943121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.943138 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.943150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:26Z","lastTransitionTime":"2026-02-26T17:16:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.953592 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.968223 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:26Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:26 crc kubenswrapper[4805]: I0226 17:16:26.988217 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:26Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.004356 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.030702 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.045472 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.048292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.048340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.048352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.048370 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.048381 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.058245 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.071779 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.085439 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.151004 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.151344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.151358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.151375 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.151387 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.259525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.259566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.259576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.259590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.259599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.362628 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.362671 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.362693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.362707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.362717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.367073 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.369154 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.369609 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.385048 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.415538 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.428178 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.448646 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.461555 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.464917 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.464948 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.464959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.464976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.464987 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.473281 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.485595 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.498596 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.567160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.567223 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.567232 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.567255 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.567273 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.669697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.669748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.669760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.669776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.669789 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.772649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.772692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.772702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.772731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.772742 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.875318 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.875387 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.875413 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.875437 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.875452 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.952619 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.952619 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.952630 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:27 crc kubenswrapper[4805]: E0226 17:16:27.952848 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:27 crc kubenswrapper[4805]: E0226 17:16:27.952914 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:27 crc kubenswrapper[4805]: E0226 17:16:27.952762 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.977976 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.978028 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.978038 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.978052 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:27 crc kubenswrapper[4805]: I0226 17:16:27.978065 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:27Z","lastTransitionTime":"2026-02-26T17:16:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.080305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.080338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.080348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.080361 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.080370 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.182476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.182520 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.182531 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.182550 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.182562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.272325 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.272382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.272391 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.272408 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.272417 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: E0226 17:16:28.292659 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:28Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.296822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.296863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.296873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.296886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.296895 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: E0226 17:16:28.314174 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:28Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.319458 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.319517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.319528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.319552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.319567 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: E0226 17:16:28.336417 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:28Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.339760 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.339791 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.339802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.339817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.339828 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: E0226 17:16:28.351468 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:28Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.354466 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.354495 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.354503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.354514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.354526 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: E0226 17:16:28.370652 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:28Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:28 crc kubenswrapper[4805]: E0226 17:16:28.370809 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.372441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.372476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.372486 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.372500 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.372510 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.475487 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.475572 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.475590 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.475613 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.475634 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.578508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.578579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.578602 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.578639 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.578658 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.681827 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.681877 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.681888 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.681903 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.681913 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.783899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.783936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.783946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.783961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.783972 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.886640 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.886717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.886732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.886758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.886772 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.989415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.989479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.989499 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.989524 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:28 crc kubenswrapper[4805]: I0226 17:16:28.989542 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:28Z","lastTransitionTime":"2026-02-26T17:16:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.092949 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.092999 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.093045 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.093070 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.093087 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.196250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.196327 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.196349 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.196768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.197008 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.299687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.299720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.299729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.299743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.299751 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.375217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.395713 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.402236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.402295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.402305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.402319 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.402330 4805 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.411362 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.427149 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.441695 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.454375 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.467490 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.500223 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.504631 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.504672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.504687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.504707 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.504720 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.516218 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:29Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.607389 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.607693 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.607804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.607926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.608067 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.710876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.711497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.711587 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.711679 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.711764 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.814933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.815053 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.815081 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.815108 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.815125 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.917677 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.917724 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.917735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.917751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.917760 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:29Z","lastTransitionTime":"2026-02-26T17:16:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.952581 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.952641 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:29 crc kubenswrapper[4805]: I0226 17:16:29.952597 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:29 crc kubenswrapper[4805]: E0226 17:16:29.952714 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:29 crc kubenswrapper[4805]: E0226 17:16:29.952769 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:29 crc kubenswrapper[4805]: E0226 17:16:29.952835 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.019735 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.019795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.019814 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.019838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.019855 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.122094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.122174 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.122193 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.122216 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.122233 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.225253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.225322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.225340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.225363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.225380 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.328922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.329000 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.329051 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.329079 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.329103 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.432103 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.432191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.432225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.432256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.432276 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.535624 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.535738 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.535764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.535795 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.535817 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.637878 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.637933 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.637951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.637974 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.637991 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.741163 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.741217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.741240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.741268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.741289 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.844892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.844967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.844990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.845078 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.845104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.947252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.947314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.947328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.947352 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:30 crc kubenswrapper[4805]: I0226 17:16:30.947368 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:30Z","lastTransitionTime":"2026-02-26T17:16:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.049986 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.050058 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.050074 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.050094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.050109 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.152785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.152863 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.152899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.152930 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.152954 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.255812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.255864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.255880 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.255904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.255922 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.359176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.359250 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.359276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.359305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.359327 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.464809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.464926 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.465607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.465648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.465663 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.567886 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.567923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.567932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.567945 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.567954 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.671358 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.671395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.671406 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.671422 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.671433 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.774140 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.774234 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.774252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.774277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.774300 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.876830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.876869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.876879 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.876891 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.876900 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.952518 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.952567 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.952534 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:31 crc kubenswrapper[4805]: E0226 17:16:31.952760 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:31 crc kubenswrapper[4805]: E0226 17:16:31.952940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:31 crc kubenswrapper[4805]: E0226 17:16:31.953122 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.981366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.981440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.981465 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.981497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:31 crc kubenswrapper[4805]: I0226 17:16:31.981516 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:31Z","lastTransitionTime":"2026-02-26T17:16:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.084378 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.084453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.084477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.084508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.084531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.187125 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.187185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.187198 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.187215 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.187228 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.274653 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-d4ls2"] Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.274993 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.278826 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.279068 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.279444 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.290128 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.290170 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.290178 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.290194 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.290203 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.294660 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.313873 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.327498 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.340175 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.355447 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.369307 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.379745 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.391959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.391982 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.391990 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.392003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.392027 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.398526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.410541 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.442264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxgv9\" (UniqueName: \"kubernetes.io/projected/5488ec5b-183b-423e-a38d-bf3aaf73b6f5-kube-api-access-jxgv9\") pod \"node-resolver-d4ls2\" (UID: \"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\") " pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.442338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/5488ec5b-183b-423e-a38d-bf3aaf73b6f5-hosts-file\") pod \"node-resolver-d4ls2\" (UID: \"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\") " pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.494799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.494869 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.494887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.494910 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.494930 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.543281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5488ec5b-183b-423e-a38d-bf3aaf73b6f5-hosts-file\") pod \"node-resolver-d4ls2\" (UID: \"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\") " pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.543397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxgv9\" (UniqueName: \"kubernetes.io/projected/5488ec5b-183b-423e-a38d-bf3aaf73b6f5-kube-api-access-jxgv9\") pod \"node-resolver-d4ls2\" (UID: \"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\") " pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.543426 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/5488ec5b-183b-423e-a38d-bf3aaf73b6f5-hosts-file\") pod \"node-resolver-d4ls2\" (UID: \"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\") " pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.570298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxgv9\" (UniqueName: \"kubernetes.io/projected/5488ec5b-183b-423e-a38d-bf3aaf73b6f5-kube-api-access-jxgv9\") pod \"node-resolver-d4ls2\" (UID: \"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\") " pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.597546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.597576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.597583 4805 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.597597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.597606 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.597712 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d4ls2" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.655865 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tv2pd"] Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.656135 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-54ch7"] Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.656571 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2mnb9"] Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.656832 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.657141 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.657492 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.662216 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.662241 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.664767 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.664798 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.664905 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.664916 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.665032 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.665268 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.665334 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.665634 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.666157 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.666717 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.680728 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.695137 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.700190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.700220 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.700233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.700259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.700271 4805 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.707411 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.721411 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.731836 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-cni-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744876 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-k8s-cni-cncf-io\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744897 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-kubelet\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744922 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-cni-bin\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-multus-certs\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744962 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25e83477-65d0-41be-8e55-fdacfc5871a8-mcd-auth-proxy-config\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-conf-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.744997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-daemon-config\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745032 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cni-binary-copy\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/25e83477-65d0-41be-8e55-fdacfc5871a8-rootfs\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-cnibin\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-os-release\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745141 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-system-cni-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745165 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-hostroot\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745185 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-etc-kubernetes\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-system-cni-dir\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9dn\" (UniqueName: \"kubernetes.io/projected/5fb6c537-e08f-48af-a1c8-5879a8519a5c-kube-api-access-pc9dn\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745324 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-cni-multus\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745360 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-os-release\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745387 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw5p\" (UniqueName: \"kubernetes.io/projected/25e83477-65d0-41be-8e55-fdacfc5871a8-kube-api-access-lpw5p\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745415 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745438 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-netns\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rswkm\" (UniqueName: \"kubernetes.io/projected/4cefacfa-0108-4252-aa69-4b35bcc0f69f-kube-api-access-rswkm\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745554 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25e83477-65d0-41be-8e55-fdacfc5871a8-proxy-tls\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745621 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4cefacfa-0108-4252-aa69-4b35bcc0f69f-cni-binary-copy\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: 
I0226 17:16:32.745638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-socket-dir-parent\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745655 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cnibin\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.745941 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.767686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.783982 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.799112 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.802473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.802510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.802523 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 
17:16:32.802540 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.802551 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.810404 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.826089 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.841909 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-cni-multus\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-os-release\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpw5p\" (UniqueName: \"kubernetes.io/projected/25e83477-65d0-41be-8e55-fdacfc5871a8-kube-api-access-lpw5p\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846425 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846440 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-netns\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rswkm\" (UniqueName: \"kubernetes.io/projected/4cefacfa-0108-4252-aa69-4b35bcc0f69f-kube-api-access-rswkm\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846470 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4cefacfa-0108-4252-aa69-4b35bcc0f69f-cni-binary-copy\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846484 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-socket-dir-parent\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cnibin\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846512 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25e83477-65d0-41be-8e55-fdacfc5871a8-proxy-tls\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-cni-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-k8s-cni-cncf-io\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-kubelet\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/25e83477-65d0-41be-8e55-fdacfc5871a8-mcd-auth-proxy-config\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-cni-bin\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846672 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-multus-certs\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846691 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-conf-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-daemon-config\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cni-binary-copy\") 
pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846752 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/25e83477-65d0-41be-8e55-fdacfc5871a8-rootfs\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-os-release\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846795 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846825 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-cnibin\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846847 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-system-cni-dir\") pod \"multus-tv2pd\" (UID: 
\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846868 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-hostroot\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-etc-kubernetes\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846893 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-cni-multus\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-system-cni-dir\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846955 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-system-cni-dir\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 
crc kubenswrapper[4805]: I0226 17:16:32.846973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9dn\" (UniqueName: \"kubernetes.io/projected/5fb6c537-e08f-48af-a1c8-5879a8519a5c-kube-api-access-pc9dn\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-os-release\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.846996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-multus-certs\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847092 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-socket-dir-parent\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847117 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cnibin\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847124 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-os-release\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847638 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5fb6c537-e08f-48af-a1c8-5879a8519a5c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cni-binary-copy\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847801 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-etc-kubernetes\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-netns\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-cni-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-run-k8s-cni-cncf-io\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847881 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/25e83477-65d0-41be-8e55-fdacfc5871a8-rootfs\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847896 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-kubelet\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847922 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-host-var-lib-cni-bin\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.847988 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-system-cni-dir\") pod \"multus-tv2pd\" (UID: 
\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848048 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-cnibin\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-hostroot\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-conf-dir\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/25e83477-65d0-41be-8e55-fdacfc5871a8-mcd-auth-proxy-config\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4cefacfa-0108-4252-aa69-4b35bcc0f69f-cni-binary-copy\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848596 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4cefacfa-0108-4252-aa69-4b35bcc0f69f-multus-daemon-config\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.848871 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5fb6c537-e08f-48af-a1c8-5879a8519a5c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.853788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/25e83477-65d0-41be-8e55-fdacfc5871a8-proxy-tls\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.863306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rswkm\" (UniqueName: \"kubernetes.io/projected/4cefacfa-0108-4252-aa69-4b35bcc0f69f-kube-api-access-rswkm\") pod \"multus-tv2pd\" (UID: \"4cefacfa-0108-4252-aa69-4b35bcc0f69f\") " pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.863381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9dn\" (UniqueName: \"kubernetes.io/projected/5fb6c537-e08f-48af-a1c8-5879a8519a5c-kube-api-access-pc9dn\") pod \"multus-additional-cni-plugins-54ch7\" (UID: \"5fb6c537-e08f-48af-a1c8-5879a8519a5c\") " pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.863608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lpw5p\" (UniqueName: \"kubernetes.io/projected/25e83477-65d0-41be-8e55-fdacfc5871a8-kube-api-access-lpw5p\") pod \"machine-config-daemon-2mnb9\" (UID: \"25e83477-65d0-41be-8e55-fdacfc5871a8\") " pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.864246 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\
",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d
2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9
be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.880228 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, 
/tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.896041 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.904845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.904882 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.904892 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 
17:16:32.904907 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.904918 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:32Z","lastTransitionTime":"2026-02-26T17:16:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.907614 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.920918 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.935278 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.947899 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.961676 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.976956 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.980662 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.987431 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tv2pd" Feb 26 17:16:32 crc kubenswrapper[4805]: W0226 17:16:32.988460 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25e83477_65d0_41be_8e55_fdacfc5871a8.slice/crio-bd514520075de249df14d47693f16a917c22f5ba805dc9e572c937d63b4a31ce WatchSource:0}: Error finding container bd514520075de249df14d47693f16a917c22f5ba805dc9e572c937d63b4a31ce: Status 404 returned error can't find the container with id bd514520075de249df14d47693f16a917c22f5ba805dc9e572c937d63b4a31ce Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.995424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:32 crc kubenswrapper[4805]: I0226 17:16:32.996994 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-54ch7" Feb 26 17:16:32 crc kubenswrapper[4805]: W0226 17:16:32.997868 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cefacfa_0108_4252_aa69_4b35bcc0f69f.slice/crio-790c4b646c0432701ccde829e29c1558d877424252653c7e609c451ebe7198cb WatchSource:0}: Error finding container 790c4b646c0432701ccde829e29c1558d877424252653c7e609c451ebe7198cb: Status 404 returned error can't find the container with id 790c4b646c0432701ccde829e29c1558d877424252653c7e609c451ebe7198cb Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.006675 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.006689 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.006736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.006746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.006762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.006775 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.022032 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqbgw"] Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.022736 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.024139 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.025510 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.025628 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.025662 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.025515 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 
17:16:33.025595 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.025846 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.025989 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.038988 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.053378 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.066102 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.075602 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.094448 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.110845 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.110993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.111069 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.111085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.111887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.111912 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.124499 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.141947 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa7510784
82a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.148905 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-var-lib-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc 
kubenswrapper[4805]: I0226 17:16:33.148966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-ovn\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149002 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-netd\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-slash\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-systemd\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149154 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-log-socket\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: 
I0226 17:16:33.149175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149235 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-script-lib\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-etc-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: 
\"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-bin\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149610 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-systemd-units\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-env-overrides\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovn-node-metrics-cert\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149712 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kffw\" (UniqueName: \"kubernetes.io/projected/1d434db3-db90-41b2-9bd3-e6ef3009f878-kube-api-access-5kffw\") 
pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149760 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-kubelet\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149803 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-config\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149854 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-netns\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.149877 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-node-log\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.153662 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.166486 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.176845 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.187344 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.197444 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.213859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.213887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.213897 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.213914 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.213931 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250671 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovn-node-metrics-cert\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250705 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kffw\" (UniqueName: \"kubernetes.io/projected/1d434db3-db90-41b2-9bd3-e6ef3009f878-kube-api-access-5kffw\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250724 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-kubelet\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-config\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-netns\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc 
kubenswrapper[4805]: I0226 17:16:33.250777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-node-log\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-var-lib-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-ovn\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-netd\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-systemd\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250866 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-log-socket\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-kubelet\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250872 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-netns\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-slash\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250978 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250978 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-node-log\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.250994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-script-lib\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-bin\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251221 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-etc-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251246 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-systemd\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-systemd-units\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-var-lib-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-env-overrides\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-netd\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251302 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251320 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-etc-openvswitch\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251285 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-log-socket\") pod \"ovnkube-node-pqbgw\" (UID: 
\"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-systemd-units\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-bin\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-slash\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-ovn\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251670 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-script-lib\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: 
I0226 17:16:33.251706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-config\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.251977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-env-overrides\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.254696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovn-node-metrics-cert\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.272990 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kffw\" (UniqueName: \"kubernetes.io/projected/1d434db3-db90-41b2-9bd3-e6ef3009f878-kube-api-access-5kffw\") pod \"ovnkube-node-pqbgw\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.316184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.316235 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.316244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.316259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.316269 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.353747 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:33 crc kubenswrapper[4805]: W0226 17:16:33.365069 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d434db3_db90_41b2_9bd3_e6ef3009f878.slice/crio-cad97b9b4310c1cae6fdac72c6b42fc2ff79310dd1574698d457f241a171bf5f WatchSource:0}: Error finding container cad97b9b4310c1cae6fdac72c6b42fc2ff79310dd1574698d457f241a171bf5f: Status 404 returned error can't find the container with id cad97b9b4310c1cae6fdac72c6b42fc2ff79310dd1574698d457f241a171bf5f Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.391792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerStarted","Data":"51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.391854 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" 
event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerStarted","Data":"07acf273f2cf759ffb377f6c05be437bf77bf65c0a59dfc61448a3e41856a686"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.393941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerStarted","Data":"bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.394109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerStarted","Data":"790c4b646c0432701ccde829e29c1558d877424252653c7e609c451ebe7198cb"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.394916 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"cad97b9b4310c1cae6fdac72c6b42fc2ff79310dd1574698d457f241a171bf5f"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.396200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.396237 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.396256 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"bd514520075de249df14d47693f16a917c22f5ba805dc9e572c937d63b4a31ce"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.397404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d4ls2" event={"ID":"5488ec5b-183b-423e-a38d-bf3aaf73b6f5","Type":"ContainerStarted","Data":"5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.397469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d4ls2" event={"ID":"5488ec5b-183b-423e-a38d-bf3aaf73b6f5","Type":"ContainerStarted","Data":"a25987bdeaf4975055626f71a7b69910765479363478df80391cb44c0e900776"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.404626 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.417810 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.418755 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.418788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.418799 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.418816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.418827 4805 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.430872 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.441793 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.454972 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.474152 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.488593 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.507191 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.520461 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.520895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.520954 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.520968 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.520993 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.521004 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.536467 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.545959 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.557370 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.574419 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.588506 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd5
53224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T1
7:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.605482 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.618930 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.622778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.622811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.622822 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.622838 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.622848 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.629871 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6a
e2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.645993 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.658997 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.674381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.687669 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.699477 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.710937 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.724901 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.724996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.725085 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.725106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.725132 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.725150 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.740149 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip
\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.758970 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.827812 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.827854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.827865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.827900 4805 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.827912 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.930507 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.930555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.930566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.930583 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.930595 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:33Z","lastTransitionTime":"2026-02-26T17:16:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.953113 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.953167 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:33 crc kubenswrapper[4805]: I0226 17:16:33.953117 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:33 crc kubenswrapper[4805]: E0226 17:16:33.953280 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:33 crc kubenswrapper[4805]: E0226 17:16:33.953455 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:33 crc kubenswrapper[4805]: E0226 17:16:33.953561 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.033126 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.033160 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.033169 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.033183 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.033196 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.136395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.136734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.136919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.137118 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.137282 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.240252 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.240305 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.240323 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.240347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.240364 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.343734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.343792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.343813 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.343837 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.343856 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.403272 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fb6c537-e08f-48af-a1c8-5879a8519a5c" containerID="51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237" exitCode=0 Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.403355 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerDied","Data":"51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.405642 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" exitCode=0 Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.405770 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.427397 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.444793 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.446762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.446809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.446817 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.446831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.446841 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.465356 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.487350 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.517153 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.539603 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.551397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.551434 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.551442 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.551455 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.551465 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.554640 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.567835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.578460 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.593589 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.606356 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.619698 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.637326 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.650890 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.653420 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.653469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc 
kubenswrapper[4805]: I0226 17:16:34.653481 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.653501 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.653514 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.672859 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:4
9Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.688888 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.705135 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.719009 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.737491 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.756319 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.756985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc 
kubenswrapper[4805]: I0226 17:16:34.757092 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.757110 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.757134 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.757154 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.771583 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.788429 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.801418 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.818378 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.830902 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.859680 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.859711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.859720 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.859732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.859741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.859805 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:34Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.962364 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.962405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.962415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.962433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:34 crc kubenswrapper[4805]: I0226 17:16:34.962446 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:34Z","lastTransitionTime":"2026-02-26T17:16:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.064655 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.064688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.064697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.064709 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.064717 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.167339 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.167383 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.167395 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.167415 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.167427 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.270237 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.270503 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.270514 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.270528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.270539 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.375883 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.375928 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.375940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.375956 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.375969 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.410797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.410906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.410927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.410945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.412262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerStarted","Data":"77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.427772 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.443968 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.460524 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.475923 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.477800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.477849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.477859 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.477872 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.477882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.490710 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.504044 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.523055 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.534248 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.545361 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.556079 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.565622 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.580161 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.580210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.580221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.580233 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.580243 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.584208 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.596905 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:35Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.682410 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.682452 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.682461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.682476 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.682485 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.784831 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.784874 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.784885 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.784904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.784917 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.887935 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.888227 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.888240 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.888259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.888271 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.952276 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:35 crc kubenswrapper[4805]: E0226 17:16:35.952472 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.952496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.952551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:35 crc kubenswrapper[4805]: E0226 17:16:35.952661 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:35 crc kubenswrapper[4805]: E0226 17:16:35.952746 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.992789 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.992830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.992846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.992866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:35 crc kubenswrapper[4805]: I0226 17:16:35.992881 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:35Z","lastTransitionTime":"2026-02-26T17:16:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.095600 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.095642 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.095657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.095676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.095691 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.197678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.197713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.197722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.197736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.197745 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.299860 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.299909 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.299919 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.299934 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.299945 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.403363 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.403414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.403423 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.403438 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.403447 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.417522 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fb6c537-e08f-48af-a1c8-5879a8519a5c" containerID="77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1" exitCode=0 Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.417583 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerDied","Data":"77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.427930 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.428325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.440078 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.462331 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.478446 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.496678 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.505940 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.505984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.505996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.506032 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.506044 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.513230 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.527731 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.540251 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.550270 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.562726 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.575493 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.588426 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.606288 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.608660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.608702 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.608713 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.608731 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.608740 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.619927 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.711658 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.711704 4805 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.711719 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.711739 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.711755 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.814847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.815189 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.815219 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.815238 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.815250 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.918145 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.918190 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.918202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.918218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.918229 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:36Z","lastTransitionTime":"2026-02-26T17:16:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:36 crc kubenswrapper[4805]: I0226 17:16:36.972633 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:36Z 
is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.004337 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02
-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.017327 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.023426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.023478 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.023493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.023512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.023525 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.029738 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.041068 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.054515 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.068002 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.081960 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.094717 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.108381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.120901 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.125722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.125751 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.125762 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.125776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.125787 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.144834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.159909 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.230599 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc 
kubenswrapper[4805]: I0226 17:16:37.230660 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.230673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.230695 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.230710 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.333846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.333922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.333938 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.333987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.334136 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.435279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.442727 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.442768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.442778 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.442793 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.442803 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.445335 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fb6c537-e08f-48af-a1c8-5879a8519a5c" containerID="6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09" exitCode=0 Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.445397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerDied","Data":"6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.461439 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.477623 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.496273 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.519952 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.533966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.548115 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.548311 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.548340 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.548350 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 
17:16:37.548366 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.548378 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.561962 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.576227 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.594888 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.610800 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.623518 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.636619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.647768 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.650347 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.650372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.650380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.650392 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.650402 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.663557 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.678495 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.694961 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.711616 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.723710 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.749459 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.753063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.753094 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.753105 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.753121 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.753134 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.768893 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 
17:16:37.781956 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 
17:16:37.805278 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.820795 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.834546 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.845557 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.856141 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.856176 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.856185 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.856201 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.856211 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.859908 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.953069 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.953072 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:37 crc kubenswrapper[4805]: E0226 17:16:37.953805 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.953108 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:37 crc kubenswrapper[4805]: E0226 17:16:37.953947 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:37 crc kubenswrapper[4805]: E0226 17:16:37.954141 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.958803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.958829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.958840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.958854 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:37 crc kubenswrapper[4805]: I0226 17:16:37.958865 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:37Z","lastTransitionTime":"2026-02-26T17:16:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.060876 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.060924 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.060936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.060953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.060966 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.166296 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.166334 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.166344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.166360 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.166371 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.268405 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.268450 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.268462 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.268477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.268487 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.371259 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.371295 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.371304 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.371672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.371686 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.452702 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.455328 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fb6c537-e08f-48af-a1c8-5879a8519a5c" containerID="13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87" exitCode=0 Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.455360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerDied","Data":"13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.470254 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.475210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.475247 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.475256 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.475269 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.475278 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.482936 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.494631 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.507424 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.520535 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.553995 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.574635 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.577083 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.577137 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.577152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.577181 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.577201 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.593203 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.609204 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.619448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.619493 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.619504 4805 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.619517 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.619526 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.620500 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6a
e2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: E0226 17:16:38.631561 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.634803 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.634841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.634850 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.634865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.634874 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.646943 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: E0226 17:16:38.647645 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.650953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.650985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.650994 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.651006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.651027 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.659203 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: E0226 17:16:38.669728 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.673515 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.673555 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.673566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.673585 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.673597 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.676360 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: E0226 17:16:38.686412 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.689537 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.689571 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.689582 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.689597 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.689608 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: E0226 17:16:38.700810 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:38Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:38 crc kubenswrapper[4805]: E0226 17:16:38.700977 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.702064 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.702091 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.702099 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.702116 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.702124 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.803690 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.803717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.803726 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.803740 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.803747 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.905834 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.905899 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.905922 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.905951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:38 crc kubenswrapper[4805]: I0226 17:16:38.905971 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:38Z","lastTransitionTime":"2026-02-26T17:16:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.008554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.008591 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.008601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.008615 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.008625 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.013002 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bjq6x"] Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.013540 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.017283 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.017436 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.017601 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.020323 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.039261 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.066677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.078307 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.089826 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.099433 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.107789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/619e6250-cc24-43ca-a031-f79f954df6d3-serviceca\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.107850 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619e6250-cc24-43ca-a031-f79f954df6d3-host\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.107887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zn4s\" (UniqueName: \"kubernetes.io/projected/619e6250-cc24-43ca-a031-f79f954df6d3-kube-api-access-6zn4s\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.111282 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.111317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc 
kubenswrapper[4805]: I0226 17:16:39.111328 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.111344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.111353 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.116550 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.130460 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.140559 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.163149 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.177867 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.191583 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.205325 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.208329 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/619e6250-cc24-43ca-a031-f79f954df6d3-serviceca\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.208372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619e6250-cc24-43ca-a031-f79f954df6d3-host\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.208400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zn4s\" (UniqueName: \"kubernetes.io/projected/619e6250-cc24-43ca-a031-f79f954df6d3-kube-api-access-6zn4s\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.208493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/619e6250-cc24-43ca-a031-f79f954df6d3-host\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.209494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/619e6250-cc24-43ca-a031-f79f954df6d3-serviceca\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.213765 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.213792 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.213801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.213815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.213824 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.222742 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.229296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zn4s\" (UniqueName: \"kubernetes.io/projected/619e6250-cc24-43ca-a031-f79f954df6d3-kube-api-access-6zn4s\") pod \"node-ca-bjq6x\" (UID: \"619e6250-cc24-43ca-a031-f79f954df6d3\") " pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.243783 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.316708 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc 
kubenswrapper[4805]: I0226 17:16:39.316746 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.316758 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.316773 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.316784 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.329307 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bjq6x" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.419750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.420039 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.420056 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.420071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.420080 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.461561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerStarted","Data":"21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.462498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bjq6x" event={"ID":"619e6250-cc24-43ca-a031-f79f954df6d3","Type":"ContainerStarted","Data":"8ccb291935f1f134c77fb4a3f7b6ea5796a4679e0331a5e85b26b5ad7df88cff"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.473450 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.493593 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.506575 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.520642 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.522659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.522701 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.522712 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc 
kubenswrapper[4805]: I0226 17:16:39.522729 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.522741 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.535717 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.546283 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.563656 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9
dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.576737 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.591738 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.604729 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.625060 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc 
kubenswrapper[4805]: I0226 17:16:39.625106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.625119 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.625136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.625148 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.626189 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.643087 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.656684 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.669294 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:39Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.727525 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.727554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.727562 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.727576 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.727586 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.829809 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.829847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.829856 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.829871 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.829882 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.932275 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.932332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.932348 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.932372 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.932387 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:39Z","lastTransitionTime":"2026-02-26T17:16:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.952910 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.953001 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:39 crc kubenswrapper[4805]: E0226 17:16:39.953053 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:39 crc kubenswrapper[4805]: I0226 17:16:39.952924 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:39 crc kubenswrapper[4805]: E0226 17:16:39.953143 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:39 crc kubenswrapper[4805]: E0226 17:16:39.953260 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.034985 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.035112 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.035127 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.035146 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.035158 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.139844 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.140239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.140257 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.140280 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.140296 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.242488 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.242530 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.242539 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.242554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.242563 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.345431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.345473 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.345482 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.345498 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.345509 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.447344 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.447453 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.447479 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.447508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.447531 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.472679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.473076 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.473290 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.473313 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.474294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bjq6x" event={"ID":"619e6250-cc24-43ca-a031-f79f954df6d3","Type":"ContainerStarted","Data":"82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.480154 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fb6c537-e08f-48af-a1c8-5879a8519a5c" containerID="21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3" exitCode=0 Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.480204 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerDied","Data":"21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.491775 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.499140 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.503305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.510190 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.525177 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.541578 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.549815 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc 
kubenswrapper[4805]: I0226 17:16:40.549841 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.549849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.549865 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.549875 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.569492 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.590269 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.603260 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.617444 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.635235 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.652673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.652711 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.652725 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.652741 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.652750 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.660076 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.673488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.687767 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.704139 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.717736 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9
dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.730785 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.742241 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.754402 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.755939 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.755989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.755998 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.756037 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.756052 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.764121 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.782190 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.796470 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.807563 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.831782 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17
:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.843935 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.856890 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.858431 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.858464 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.858477 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 
17:16:40.858497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.858509 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.869488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.881754 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.897101 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.907646 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e366
5661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:40Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.960673 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.960722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.960732 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.960743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:40 crc kubenswrapper[4805]: I0226 17:16:40.960753 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:40Z","lastTransitionTime":"2026-02-26T17:16:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.063512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.063554 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.063565 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.063578 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.063588 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.167006 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.167136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.167156 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.167186 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.167204 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.270338 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.270386 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.270397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.270414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.270426 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.373635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.373684 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.373697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.373716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.373730 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.476722 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.476770 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.476785 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.476801 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.476814 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.487981 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fb6c537-e08f-48af-a1c8-5879a8519a5c" containerID="a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659" exitCode=0 Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.488062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerDied","Data":"a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.507535 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.526742 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert
\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.545610 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.560897 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.571689 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.578697 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.578728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.578736 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.578750 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.578759 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.590835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.606129 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.622278 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.644577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17
:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.660942 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.676548 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.681743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.681807 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.681822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 
17:16:41.681846 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.681859 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.689341 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.703268 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.719516 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:41Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.783625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc 
kubenswrapper[4805]: I0226 17:16:41.783659 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.783670 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.783685 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.783696 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.886440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.886522 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.886546 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.886569 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.886588 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.938807 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.938942 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.938965 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.938986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.939006 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939133 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939185 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:17:13.939171211 +0000 UTC m=+148.500925550 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939222 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939241 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939253 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939282 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:17:13.939274244 +0000 UTC m=+148.501028583 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939313 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939333 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:17:13.939326925 +0000 UTC m=+148.501081264 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939370 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939378 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939385 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939390 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:17:13.939352216 +0000 UTC m=+148.501106585 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.939446 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:17:13.939424517 +0000 UTC m=+148.501178886 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.952382 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.952522 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.952729 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.953215 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.953091 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:41 crc kubenswrapper[4805]: E0226 17:16:41.952895 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.988931 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.988971 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.988980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.988995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:41 crc kubenswrapper[4805]: I0226 17:16:41.989006 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:41Z","lastTransitionTime":"2026-02-26T17:16:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.091528 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.091564 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.091573 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.091589 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.091599 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.194904 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.194962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.194989 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.195034 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.195053 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.298188 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.298469 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.298764 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.298979 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.299211 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.402075 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.402441 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.402553 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.402686 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.402818 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.493223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" event={"ID":"5fb6c537-e08f-48af-a1c8-5879a8519a5c","Type":"ContainerStarted","Data":"93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.505106 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.505294 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.505382 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.505508 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.505572 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.505998 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.517321 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.536548 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.551534 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
6T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.566758 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.587532 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.609147 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.609210 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.609221 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.609239 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.609251 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.612587 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.626662 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.643222 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T1
7:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.658322 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.673179 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.693577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.711191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.711225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.711236 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.711251 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.711263 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.718308 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.733514 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.814136 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.814217 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.814244 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 
17:16:42.814277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.814331 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.895443 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.915161 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.916978 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.917129 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.917211 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.917317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.917416 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:42Z","lastTransitionTime":"2026-02-26T17:16:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.934741 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:42 crc kubenswrapper[4805]: I0226 17:16:42.970306 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.007262 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:42Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.019913 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.019953 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.019967 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.019984 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.019996 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:43Z","lastTransitionTime":"2026-02-26T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.037094 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.052749 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85
860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf015402
58254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T
17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.069932 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.083965 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.109184 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.122276 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.122322 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.122332 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.122351 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.122363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:43Z","lastTransitionTime":"2026-02-26T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.124852 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.140495 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.152381 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.163079 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.173877 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:43Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.224394 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.224645 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.224733 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:43 crc 
kubenswrapper[4805]: I0226 17:16:43.224806 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.224939 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:43Z","lastTransitionTime":"2026-02-26T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.327961 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.327995 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.328005 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.328035 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:43 crc kubenswrapper[4805]: I0226 17:16:43.328046 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:43Z","lastTransitionTime":"2026-02-26T17:16:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.843550 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:44 crc kubenswrapper[4805]: E0226 17:16:44.843665 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.843829 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:44 crc kubenswrapper[4805]: E0226 17:16:44.843898 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.844335 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:44 crc kubenswrapper[4805]: E0226 17:16:44.844430 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.848772 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.848830 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.848843 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.848864 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.848886 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:44Z","lastTransitionTime":"2026-02-26T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.937609 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q"] Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.938254 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.942247 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.943344 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.951663 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.951716 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.951728 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.951748 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.951760 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:44Z","lastTransitionTime":"2026-02-26T17:16:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.955057 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:44Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.971945 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shz7d\" (UniqueName: \"kubernetes.io/projected/3a674c1d-d647-41a7-a989-7e604dd9865a-kube-api-access-shz7d\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.972341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3a674c1d-d647-41a7-a989-7e604dd9865a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.972591 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3a674c1d-d647-41a7-a989-7e604dd9865a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.972718 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3a674c1d-d647-41a7-a989-7e604dd9865a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.973575 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:44Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:44 crc kubenswrapper[4805]: I0226 17:16:44.986215 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:44Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.012288 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.029011 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.044063 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.054432 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.054717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc 
kubenswrapper[4805]: I0226 17:16:45.054804 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.054887 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.054967 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.059359 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.070383 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.073057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shz7d\" (UniqueName: \"kubernetes.io/projected/3a674c1d-d647-41a7-a989-7e604dd9865a-kube-api-access-shz7d\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.073183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3a674c1d-d647-41a7-a989-7e604dd9865a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.073290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3a674c1d-d647-41a7-a989-7e604dd9865a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.073397 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/3a674c1d-d647-41a7-a989-7e604dd9865a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.074329 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3a674c1d-d647-41a7-a989-7e604dd9865a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.074913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3a674c1d-d647-41a7-a989-7e604dd9865a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.078404 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3a674c1d-d647-41a7-a989-7e604dd9865a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.089279 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.097401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shz7d\" (UniqueName: \"kubernetes.io/projected/3a674c1d-d647-41a7-a989-7e604dd9865a-kube-api-access-shz7d\") pod \"ovnkube-control-plane-749d76644c-5ss5q\" (UID: \"3a674c1d-d647-41a7-a989-7e604dd9865a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.102049 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.117217 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.131528 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.143380 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.157687 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.157721 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.157730 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.157743 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.157754 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.158315 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.173826 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.259366 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.261159 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.261206 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.261230 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.261263 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.261281 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.364616 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.364661 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.364672 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.364688 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.364706 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.468544 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.468593 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.468605 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.468625 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.468639 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.570776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.570802 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.570810 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.570822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.570830 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.673152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.673191 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.673202 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.673218 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.673229 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.701742 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hbv6d"] Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.702486 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:45 crc kubenswrapper[4805]: E0226 17:16:45.702580 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.719377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.732989 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.747302 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.763855 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.775664 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.775828 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.775943 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.776056 4805 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.776153 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.778323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.778363 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfx6\" (UniqueName: \"kubernetes.io/projected/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-kube-api-access-kwfx6\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.785735 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.797884 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc 
kubenswrapper[4805]: I0226 17:16:45.819198 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.833540 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.845512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.857619 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.859582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" event={"ID":"3a674c1d-d647-41a7-a989-7e604dd9865a","Type":"ContainerStarted","Data":"cc9ee96dd45eafc8e2d880b861db16a4dca1cfee08257d56e6febe4293435266"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.871854 4805 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.878900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.878932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.878965 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.878980 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.878996 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.879008 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.878944 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwfx6\" (UniqueName: \"kubernetes.io/projected/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-kube-api-access-kwfx6\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:45 crc kubenswrapper[4805]: E0226 17:16:45.879166 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:45 crc kubenswrapper[4805]: E0226 17:16:45.879251 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:16:46.379227636 +0000 UTC m=+120.940981985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.893067 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.895960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwfx6\" (UniqueName: \"kubernetes.io/projected/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-kube-api-access-kwfx6\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.905727 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.922593 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.933124 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.945113 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:45Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.980951 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.981003 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.981047 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.981071 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:45 crc kubenswrapper[4805]: I0226 17:16:45.981089 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:45Z","lastTransitionTime":"2026-02-26T17:16:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.083595 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.083648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.083665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.083692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.083709 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.188512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.188566 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.188584 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.188607 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.188623 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.291314 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.291369 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.291380 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.291397 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.291416 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.382937 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.383070 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.383133 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:16:47.383118116 +0000 UTC m=+121.944872445 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.393768 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.394001 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.394120 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.394253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.394320 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.497345 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.497414 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.497433 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.497456 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.497472 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.600717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.600811 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.600829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.600853 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.600872 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.703717 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.703783 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.703800 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.703822 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.703838 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.806184 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.806268 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.806277 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.806292 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.806301 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:46Z","lastTransitionTime":"2026-02-26T17:16:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.863962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" event={"ID":"3a674c1d-d647-41a7-a989-7e604dd9865a","Type":"ContainerStarted","Data":"2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.865904 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/0.log" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.869851 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe" exitCode=1 Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.869887 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe"} Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.870958 4805 scope.go:117] "RemoveContainer" containerID="2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.885634 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.900005 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.906910 4805 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.912553 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.939151 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:46Z\\\",\\\"message\\\":\\\" 6647 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 17:16:45.351675 6647 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:45.351702 6647 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI0226 17:16:45.351709 6647 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 17:16:45.351755 6647 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 17:16:45.351764 6647 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:45.351828 6647 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:45.353484 6647 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:45.357248 6647 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 17:16:45.357258 6647 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 17:16:45.357306 6647 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:45.357337 6647 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:45.357339 6647 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 17:16:45.357384 6647 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 17:16:45.357364 6647 factory.go:656] Stopping watch factory\\\\nI0226 17:16:45.357414 6647 ovnkube.go:599] Stopped ovnkube\\\\nI0226 
17:16:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e7092
0607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.952763 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.952841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.952925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.952796 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.953150 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.953138 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.953211 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:46 crc kubenswrapper[4805]: E0226 17:16:46.953287 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.957231 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.974670 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba9
3a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b4
30e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.986307 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:46 crc kubenswrapper[4805]: I0226 17:16:46.998775 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:46Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.015791 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z 
is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.036181 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.052414 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.071830 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: E0226 17:16:47.078092 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.089969 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 
17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.107627 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc 
kubenswrapper[4805]: I0226 17:16:47.129691 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.139500 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.151084 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.165854 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.180155 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.192682 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.211386 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:46Z\\\",\\\"message\\\":\\\" 6647 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 17:16:45.351675 6647 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:45.351702 6647 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI0226 17:16:45.351709 6647 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 17:16:45.351755 6647 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 17:16:45.351764 6647 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:45.351828 6647 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:45.353484 6647 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:45.357248 6647 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 17:16:45.357258 6647 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 17:16:45.357306 6647 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:45.357337 6647 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:45.357339 6647 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 17:16:45.357384 6647 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 17:16:45.357364 6647 factory.go:656] Stopping watch factory\\\\nI0226 17:16:45.357414 6647 ovnkube.go:599] Stopped ovnkube\\\\nI0226 
17:16:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e7092
0607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.228780 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.238712 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.249584 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.262437 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17
:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.272630 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc 
kubenswrapper[4805]: I0226 17:16:47.290528 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.302372 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.313488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.324542 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.346690 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.361934 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.391299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:47 crc kubenswrapper[4805]: E0226 17:16:47.391492 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:47 crc kubenswrapper[4805]: E0226 17:16:47.391578 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:16:49.39156128 +0000 UTC m=+123.953315619 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.879534 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/0.log" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.883083 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2"} Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.883726 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.885470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" event={"ID":"3a674c1d-d647-41a7-a989-7e604dd9865a","Type":"ContainerStarted","Data":"b1ecc506d706c8a1a59db461b134b8a0985cc022014987a6bbf1b0da5de90ac6"} Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.902555 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.916771 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.931213 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.943734 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.955664 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.968698 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc kubenswrapper[4805]: I0226 17:16:47.979395 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:47 crc 
kubenswrapper[4805]: I0226 17:16:47.990739 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:47Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.003952 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.019544 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.035085 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.048476 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.072297 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:46Z\\\",\\\"message\\\":\\\" 6647 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 17:16:45.351675 6647 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:45.351702 6647 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:45.351709 6647 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0226 17:16:45.351755 6647 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 17:16:45.351764 6647 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:45.351828 6647 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:45.353484 6647 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:45.357248 6647 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 17:16:45.357258 6647 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 17:16:45.357306 6647 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:45.357337 6647 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:45.357339 6647 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 17:16:45.357384 6647 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 17:16:45.357364 6647 factory.go:656] Stopping watch factory\\\\nI0226 17:16:45.357414 6647 ovnkube.go:599] Stopped ovnkube\\\\nI0226 
17:16:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.089193 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.100706 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.113091 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.124873 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 
17:16:48.137814 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.147522 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.166003 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:46Z\\\",\\\"message\\\":\\\" 6647 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 17:16:45.351675 6647 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:45.351702 6647 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:45.351709 6647 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0226 17:16:45.351755 6647 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 17:16:45.351764 6647 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:45.351828 6647 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:45.353484 6647 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:45.357248 6647 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 17:16:45.357258 6647 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 17:16:45.357306 6647 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:45.357337 6647 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:45.357339 6647 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 17:16:45.357384 6647 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 17:16:45.357364 6647 factory.go:656] Stopping watch factory\\\\nI0226 17:16:45.357414 6647 ovnkube.go:599] Stopped ovnkube\\\\nI0226 
17:16:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.178958 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.190488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.199911 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.211298 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.228377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host
/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.246106 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.259055 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.275834 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.287357 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.299650 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc 
kubenswrapper[4805]: I0226 17:16:48.322601 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.337641 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.515044 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.891536 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/1.log" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.892468 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/0.log" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.896295 4805 generic.go:334] "Generic (PLEG): container 
finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2" exitCode=1 Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.896402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2"} Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.896492 4805 scope.go:117] "RemoveContainer" containerID="2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.898599 4805 scope.go:117] "RemoveContainer" containerID="70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2" Feb 26 17:16:48 crc kubenswrapper[4805]: E0226 17:16:48.898936 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.922384 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.952637 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.952769 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:48 crc kubenswrapper[4805]: E0226 17:16:48.952811 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:48 crc kubenswrapper[4805]: E0226 17:16:48.953010 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.952637 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:48 crc kubenswrapper[4805]: E0226 17:16:48.953205 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.953345 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:48 crc kubenswrapper[4805]: E0226 17:16:48.953458 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:48 crc kubenswrapper[4805]: I0226 17:16:48.975171 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.007865 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:48Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc 
kubenswrapper[4805]: I0226 17:16:49.025514 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.036432 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.046967 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.056942 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.066574 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.081880 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2241bb64bd33564cacd9768da6b4c286f9435754211bf8224871c9bafe8e2bfe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:46Z\\\",\\\"message\\\":\\\" 6647 handler.go:208] Removed *v1.Node event handler 7\\\\nI0226 17:16:45.351675 6647 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:45.351702 6647 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:45.351709 6647 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0226 17:16:45.351755 6647 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0226 17:16:45.351764 6647 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:45.351828 6647 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:45.353484 6647 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:45.357248 6647 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0226 17:16:45.357258 6647 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0226 17:16:45.357306 6647 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:45.357337 6647 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:45.357339 6647 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0226 17:16:45.357384 6647 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0226 17:16:45.357364 6647 factory.go:656] Stopping watch factory\\\\nI0226 17:16:45.357414 6647 ovnkube.go:599] Stopped ovnkube\\\\nI0226 17:16:4\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"message\\\":\\\"o:140\\\\nI0226 17:16:47.907933 6888 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.907990 6888 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:47.908010 6888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 17:16:47.908054 6888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:47.908069 6888 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0226 17:16:47.908086 6888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:47.908086 6888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:47.908100 6888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:47.908123 6888 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:47.908140 6888 factory.go:656] Stopping watch factory\\\\nI0226 17:16:47.908148 6888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 17:16:47.908119 6888 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.908157 6888 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.093531 4805 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.103601 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.103649 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.103662 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.103678 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.103692 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:49Z","lastTransitionTime":"2026-02-26T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.104393 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.114664 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.116128 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.119749 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.119779 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.119788 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.119805 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.119817 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:49Z","lastTransitionTime":"2026-02-26T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.124827 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.133502 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.136747 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.137007 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.137065 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.137077 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.137093 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.137104 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:49Z","lastTransitionTime":"2026-02-26T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.146526 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.149609 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.152577 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.152622 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.152635 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.152652 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.152664 4805 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:49Z","lastTransitionTime":"2026-02-26T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.158512 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.165558 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.169448 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.169497 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.169510 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.169527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.169540 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:49Z","lastTransitionTime":"2026-02-26T17:16:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.184884 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.185071 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.413971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.414233 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.414314 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:16:53.414291435 +0000 UTC m=+127.976045854 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.902588 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/1.log" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.906774 4805 scope.go:117] "RemoveContainer" containerID="70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2" Feb 26 17:16:49 crc kubenswrapper[4805]: E0226 17:16:49.906927 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.923603 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.939310 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.963447 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"message\\\":\\\"o:140\\\\nI0226 17:16:47.907933 6888 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.907990 6888 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:47.908010 6888 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0226 17:16:47.908054 6888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:47.908069 6888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:47.908086 6888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:47.908086 6888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:47.908100 6888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:47.908123 6888 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:47.908140 6888 factory.go:656] Stopping watch factory\\\\nI0226 17:16:47.908148 6888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 17:16:47.908119 6888 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.908157 6888 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.980061 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:49 crc kubenswrapper[4805]: I0226 17:16:49.996534 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:49Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.010952 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.029302 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.051307 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8c
cf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host
/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.067295 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.082892 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.097109 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.113460 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.128746 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc 
kubenswrapper[4805]: I0226 17:16:50.154848 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.172600 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.187358 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:50Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.952519 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.952562 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.952572 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:50 crc kubenswrapper[4805]: I0226 17:16:50.952551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:50 crc kubenswrapper[4805]: E0226 17:16:50.952745 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:50 crc kubenswrapper[4805]: E0226 17:16:50.952832 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:50 crc kubenswrapper[4805]: E0226 17:16:50.952908 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:50 crc kubenswrapper[4805]: E0226 17:16:50.952960 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:52 crc kubenswrapper[4805]: E0226 17:16:52.079353 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:16:52 crc kubenswrapper[4805]: I0226 17:16:52.952867 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:52 crc kubenswrapper[4805]: I0226 17:16:52.953032 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:52 crc kubenswrapper[4805]: I0226 17:16:52.953033 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:52 crc kubenswrapper[4805]: I0226 17:16:52.952936 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:52 crc kubenswrapper[4805]: E0226 17:16:52.953212 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:52 crc kubenswrapper[4805]: E0226 17:16:52.953366 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:52 crc kubenswrapper[4805]: E0226 17:16:52.953541 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:52 crc kubenswrapper[4805]: E0226 17:16:52.953677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:53 crc kubenswrapper[4805]: I0226 17:16:53.454673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:53 crc kubenswrapper[4805]: E0226 17:16:53.454921 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:53 crc kubenswrapper[4805]: E0226 17:16:53.455106 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:17:01.455065839 +0000 UTC m=+136.016820218 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:16:54 crc kubenswrapper[4805]: I0226 17:16:54.952901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:54 crc kubenswrapper[4805]: I0226 17:16:54.952901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:54 crc kubenswrapper[4805]: I0226 17:16:54.953099 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:54 crc kubenswrapper[4805]: E0226 17:16:54.953177 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:54 crc kubenswrapper[4805]: E0226 17:16:54.953293 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:54 crc kubenswrapper[4805]: E0226 17:16:54.953405 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:54 crc kubenswrapper[4805]: I0226 17:16:54.953885 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:54 crc kubenswrapper[4805]: E0226 17:16:54.954178 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.952964 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.953009 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.952980 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:56 crc kubenswrapper[4805]: E0226 17:16:56.953136 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:56 crc kubenswrapper[4805]: E0226 17:16:56.953516 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.953564 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:56 crc kubenswrapper[4805]: E0226 17:16:56.953758 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:56 crc kubenswrapper[4805]: E0226 17:16:56.954332 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.962331 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.966564 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:56Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:56 crc kubenswrapper[4805]: I0226 17:16:56.989520 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"message\\\":\\\"o:140\\\\nI0226 17:16:47.907933 6888 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.907990 6888 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:47.908010 6888 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0226 17:16:47.908054 6888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:47.908069 6888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:47.908086 6888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:47.908086 6888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:47.908100 6888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:47.908123 6888 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:47.908140 6888 factory.go:656] Stopping watch factory\\\\nI0226 17:16:47.908148 6888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 17:16:47.908119 6888 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.908157 6888 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:56Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.005634 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.020682 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.035666 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.048748 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.070375 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: E0226 17:16:57.080371 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.084951 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63
f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.102613 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.114739 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.128811 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.140248 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc 
kubenswrapper[4805]: I0226 17:16:57.161894 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.177001 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.190685 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:57 crc kubenswrapper[4805]: I0226 17:16:57.202489 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:57Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:58 crc kubenswrapper[4805]: I0226 17:16:58.952718 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:16:58 crc kubenswrapper[4805]: I0226 17:16:58.952719 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:16:58 crc kubenswrapper[4805]: I0226 17:16:58.952856 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:16:58 crc kubenswrapper[4805]: E0226 17:16:58.953046 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:16:58 crc kubenswrapper[4805]: I0226 17:16:58.953441 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:16:58 crc kubenswrapper[4805]: E0226 17:16:58.953634 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:16:58 crc kubenswrapper[4805]: E0226 17:16:58.953765 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:16:58 crc kubenswrapper[4805]: E0226 17:16:58.953819 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.448847 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.448912 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.448932 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.448959 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.448976 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:59Z","lastTransitionTime":"2026-02-26T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:59 crc kubenswrapper[4805]: E0226 17:16:59.464873 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:59Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.469440 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.469484 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.469496 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.469512 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.469524 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:59Z","lastTransitionTime":"2026-02-26T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:59 crc kubenswrapper[4805]: E0226 17:16:59.483155 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:59Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.486895 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.486936 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.486946 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.486962 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.486973 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:59Z","lastTransitionTime":"2026-02-26T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:59 crc kubenswrapper[4805]: E0226 17:16:59.499163 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:59Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.503089 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.503139 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.503152 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.503166 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.503176 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:59Z","lastTransitionTime":"2026-02-26T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:59 crc kubenswrapper[4805]: E0226 17:16:59.520050 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:59Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.524632 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.524668 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.524682 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.524699 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:16:59 crc kubenswrapper[4805]: I0226 17:16:59.524711 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:16:59Z","lastTransitionTime":"2026-02-26T17:16:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:16:59 crc kubenswrapper[4805]: E0226 17:16:59.539483 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:16:59Z is after 2025-08-24T17:21:41Z" Feb 26 17:16:59 crc kubenswrapper[4805]: E0226 17:16:59.539641 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:17:00 crc kubenswrapper[4805]: I0226 17:17:00.953259 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:00 crc kubenswrapper[4805]: I0226 17:17:00.953299 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:00 crc kubenswrapper[4805]: I0226 17:17:00.953448 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:00 crc kubenswrapper[4805]: E0226 17:17:00.953456 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:00 crc kubenswrapper[4805]: I0226 17:17:00.953489 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:00 crc kubenswrapper[4805]: E0226 17:17:00.953641 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:00 crc kubenswrapper[4805]: E0226 17:17:00.954040 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:00 crc kubenswrapper[4805]: E0226 17:17:00.954111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:00 crc kubenswrapper[4805]: I0226 17:17:00.954421 4805 scope.go:117] "RemoveContainer" containerID="70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2" Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.549714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:01 crc kubenswrapper[4805]: E0226 17:17:01.549860 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:17:01 crc kubenswrapper[4805]: E0226 17:17:01.550004 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:17:17.549990526 +0000 UTC m=+152.111744865 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.950753 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/2.log" Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.951783 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/1.log" Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.956374 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f" exitCode=1 Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.956427 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f"} Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.956479 4805 scope.go:117] "RemoveContainer" containerID="70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2" Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.957524 4805 scope.go:117] "RemoveContainer" containerID="5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f" Feb 26 17:17:01 crc kubenswrapper[4805]: E0226 17:17:01.957763 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.979166 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:01Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:01 crc kubenswrapper[4805]: I0226 17:17:01.996199 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:01Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.011435 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.026770 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.039132 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.068811 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70e91293898a48ef5b4a3169a2f369f3e384a8c214611db52ec9921d402308f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"message\\\":\\\"o:140\\\\nI0226 17:16:47.907933 6888 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.907990 6888 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0226 17:16:47.908010 6888 handler.go:190] 
Sending *v1.Namespace event handler 5 for removal\\\\nI0226 17:16:47.908054 6888 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0226 17:16:47.908069 6888 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0226 17:16:47.908086 6888 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0226 17:16:47.908086 6888 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0226 17:16:47.908100 6888 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0226 17:16:47.908123 6888 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0226 17:16:47.908140 6888 factory.go:656] Stopping watch factory\\\\nI0226 17:16:47.908148 6888 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0226 17:16:47.908119 6888 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0226 17:16:47.908157 6888 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e
70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: E0226 17:17:02.081998 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.089535 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.107457 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.121149 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.135267 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.150464 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.165689 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.183677 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.198835 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.212699 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.223085 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc 
kubenswrapper[4805]: I0226 17:17:02.241706 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:02Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.952357 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.952493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.952662 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:02 crc kubenswrapper[4805]: E0226 17:17:02.952513 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:02 crc kubenswrapper[4805]: E0226 17:17:02.952758 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:02 crc kubenswrapper[4805]: E0226 17:17:02.952868 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.952670 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:02 crc kubenswrapper[4805]: E0226 17:17:02.953111 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:02 crc kubenswrapper[4805]: I0226 17:17:02.967948 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/2.log" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.354119 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.356426 4805 scope.go:117] "RemoveContainer" containerID="5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f" Feb 26 17:17:03 crc kubenswrapper[4805]: E0226 17:17:03.356642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.375464 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.418037 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.434635 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.450688 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\"
,\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.471828 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.489563 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.505216 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.521365 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.536686 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc 
kubenswrapper[4805]: I0226 17:17:03.561559 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.577859 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.588982 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.600394 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.614358 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.627323 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.648973 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:03 crc kubenswrapper[4805]: I0226 17:17:03.664954 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:03Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:04 crc kubenswrapper[4805]: I0226 17:17:04.952269 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:04 crc kubenswrapper[4805]: I0226 17:17:04.952342 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:04 crc kubenswrapper[4805]: I0226 17:17:04.952340 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:04 crc kubenswrapper[4805]: I0226 17:17:04.952531 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:04 crc kubenswrapper[4805]: E0226 17:17:04.952639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:04 crc kubenswrapper[4805]: E0226 17:17:04.952722 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:04 crc kubenswrapper[4805]: E0226 17:17:04.952765 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:04 crc kubenswrapper[4805]: E0226 17:17:04.952863 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:04 crc kubenswrapper[4805]: I0226 17:17:04.967788 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 26 17:17:06 crc kubenswrapper[4805]: I0226 17:17:06.952496 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:06 crc kubenswrapper[4805]: I0226 17:17:06.952540 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:06 crc kubenswrapper[4805]: I0226 17:17:06.952576 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:06 crc kubenswrapper[4805]: E0226 17:17:06.952683 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:06 crc kubenswrapper[4805]: E0226 17:17:06.952846 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:06 crc kubenswrapper[4805]: E0226 17:17:06.953101 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:06 crc kubenswrapper[4805]: I0226 17:17:06.952649 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:06 crc kubenswrapper[4805]: E0226 17:17:06.953342 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:06 crc kubenswrapper[4805]: I0226 17:17:06.968701 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:06Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:06 crc kubenswrapper[4805]: I0226 17:17:06.982603 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:06Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc 
kubenswrapper[4805]: I0226 17:17:07.007289 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.023995 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.044938 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.062529 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.077946 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: E0226 17:17:07.082698 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.099763 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.114775 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.130065 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.145311 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.157281 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.175392 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.187850 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.208125 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.222276 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.232842 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:07 crc kubenswrapper[4805]: I0226 17:17:07.245642 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:07Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:08 crc kubenswrapper[4805]: I0226 17:17:08.953685 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:08 crc kubenswrapper[4805]: I0226 17:17:08.953737 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:08 crc kubenswrapper[4805]: I0226 17:17:08.953759 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:08 crc kubenswrapper[4805]: E0226 17:17:08.953943 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:08 crc kubenswrapper[4805]: E0226 17:17:08.954129 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:08 crc kubenswrapper[4805]: E0226 17:17:08.954264 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:08 crc kubenswrapper[4805]: I0226 17:17:08.954353 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:08 crc kubenswrapper[4805]: E0226 17:17:08.954445 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.654776 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.654816 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.654829 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.654845 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.654858 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:09Z","lastTransitionTime":"2026-02-26T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:09 crc kubenswrapper[4805]: E0226 17:17:09.673768 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:09Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.677781 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.677832 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.677849 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.677873 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.677890 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:09Z","lastTransitionTime":"2026-02-26T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:09 crc kubenswrapper[4805]: E0226 17:17:09.690748 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:09Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.694426 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.694461 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.694491 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.694509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.694520 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:09Z","lastTransitionTime":"2026-02-26T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:09 crc kubenswrapper[4805]: E0226 17:17:09.707710 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:09Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.711253 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.711371 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.711436 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.711502 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.711562 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:09Z","lastTransitionTime":"2026-02-26T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:09 crc kubenswrapper[4805]: E0226 17:17:09.728162 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:09Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.731617 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.731657 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.731665 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.731676 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:09 crc kubenswrapper[4805]: I0226 17:17:09.731685 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:09Z","lastTransitionTime":"2026-02-26T17:17:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:09 crc kubenswrapper[4805]: E0226 17:17:09.748238 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:09Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:09 crc kubenswrapper[4805]: E0226 17:17:09.748552 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:17:10 crc kubenswrapper[4805]: I0226 17:17:10.952542 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:10 crc kubenswrapper[4805]: I0226 17:17:10.952559 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:10 crc kubenswrapper[4805]: I0226 17:17:10.952615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:10 crc kubenswrapper[4805]: I0226 17:17:10.952688 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:10 crc kubenswrapper[4805]: E0226 17:17:10.952798 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:10 crc kubenswrapper[4805]: E0226 17:17:10.952940 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:10 crc kubenswrapper[4805]: E0226 17:17:10.953164 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:10 crc kubenswrapper[4805]: E0226 17:17:10.953350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:12 crc kubenswrapper[4805]: E0226 17:17:12.084168 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:12 crc kubenswrapper[4805]: I0226 17:17:12.953155 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:12 crc kubenswrapper[4805]: E0226 17:17:12.953341 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:12 crc kubenswrapper[4805]: I0226 17:17:12.953583 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:12 crc kubenswrapper[4805]: E0226 17:17:12.953639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:12 crc kubenswrapper[4805]: I0226 17:17:12.953755 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:12 crc kubenswrapper[4805]: E0226 17:17:12.953819 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:12 crc kubenswrapper[4805]: I0226 17:17:12.953747 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:12 crc kubenswrapper[4805]: E0226 17:17:12.953969 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:13 crc kubenswrapper[4805]: I0226 17:17:13.966196 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 26 17:17:13 crc kubenswrapper[4805]: I0226 17:17:13.984904 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985013 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:17.984991171 +0000 UTC m=+212.546745510 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:17:13 crc kubenswrapper[4805]: I0226 17:17:13.985071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:13 crc kubenswrapper[4805]: I0226 17:17:13.985103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:13 crc kubenswrapper[4805]: I0226 17:17:13.985139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:13 crc kubenswrapper[4805]: I0226 17:17:13.985185 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985216 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985231 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985243 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985265 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985276 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:18:17.985266818 +0000 UTC m=+212.547021157 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985382 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:18:17.9853638 +0000 UTC m=+212.547118139 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985284 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985405 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985404 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985513 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:18:17.985490554 +0000 UTC m=+212.547244903 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985417 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:17:13 crc kubenswrapper[4805]: E0226 17:17:13.985582 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:18:17.985571536 +0000 UTC m=+212.547325955 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:17:14 crc kubenswrapper[4805]: I0226 17:17:14.952823 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:14 crc kubenswrapper[4805]: I0226 17:17:14.952909 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:14 crc kubenswrapper[4805]: I0226 17:17:14.952863 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:14 crc kubenswrapper[4805]: E0226 17:17:14.953093 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:14 crc kubenswrapper[4805]: I0226 17:17:14.953120 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:14 crc kubenswrapper[4805]: E0226 17:17:14.953269 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:14 crc kubenswrapper[4805]: E0226 17:17:14.953369 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:14 crc kubenswrapper[4805]: E0226 17:17:14.953459 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:15 crc kubenswrapper[4805]: I0226 17:17:15.952899 4805 scope.go:117] "RemoveContainer" containerID="5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f" Feb 26 17:17:15 crc kubenswrapper[4805]: E0226 17:17:15.953174 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:16 crc kubenswrapper[4805]: I0226 17:17:16.952239 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:16 crc kubenswrapper[4805]: I0226 17:17:16.952331 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:16 crc kubenswrapper[4805]: I0226 17:17:16.952305 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:16 crc kubenswrapper[4805]: E0226 17:17:16.952505 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:16 crc kubenswrapper[4805]: I0226 17:17:16.952558 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:16 crc kubenswrapper[4805]: E0226 17:17:16.952705 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:16 crc kubenswrapper[4805]: E0226 17:17:16.952894 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:16 crc kubenswrapper[4805]: E0226 17:17:16.952962 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:16 crc kubenswrapper[4805]: I0226 17:17:16.973700 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:16Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:16 crc kubenswrapper[4805]: I0226 17:17:16.988946 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:16Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.003358 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.018349 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.034449 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.048463 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.063874 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: E0226 17:17:17.085368 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.102345 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.114120 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.133233 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.148488 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.166972 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.201586 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.224660 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.241155 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.256624 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.276468 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.293432 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc kubenswrapper[4805]: I0226 17:17:17.307676 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:17Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:17 crc 
kubenswrapper[4805]: I0226 17:17:17.621637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:17 crc kubenswrapper[4805]: E0226 17:17:17.622107 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:17:17 crc kubenswrapper[4805]: E0226 17:17:17.622355 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:17:49.622303948 +0000 UTC m=+184.184058447 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:17:18 crc kubenswrapper[4805]: I0226 17:17:18.952307 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:18 crc kubenswrapper[4805]: I0226 17:17:18.952423 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:18 crc kubenswrapper[4805]: I0226 17:17:18.952423 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:18 crc kubenswrapper[4805]: I0226 17:17:18.952307 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:18 crc kubenswrapper[4805]: E0226 17:17:18.952534 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:18 crc kubenswrapper[4805]: E0226 17:17:18.952698 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:18 crc kubenswrapper[4805]: E0226 17:17:18.952795 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:18 crc kubenswrapper[4805]: E0226 17:17:18.952897 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.033444 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.033509 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.033527 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.033556 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.033575 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:20Z","lastTransitionTime":"2026-02-26T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.047910 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:20Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.051923 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.051960 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.051972 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.051987 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.051997 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:20Z","lastTransitionTime":"2026-02-26T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.067543 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:20Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.071766 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.071821 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.071833 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.071848 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.071858 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:20Z","lastTransitionTime":"2026-02-26T17:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:20Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.134426 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.952517 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.952685 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.952862 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.952890 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:20 crc kubenswrapper[4805]: I0226 17:17:20.952973 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.953117 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.953197 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:20 crc kubenswrapper[4805]: E0226 17:17:20.953266 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.041989 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/0.log" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.042057 4805 generic.go:334] "Generic (PLEG): container finished" podID="4cefacfa-0108-4252-aa69-4b35bcc0f69f" containerID="bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4" exitCode=1 Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.042091 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerDied","Data":"bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4"} Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.042490 4805 scope.go:117] "RemoveContainer" containerID="bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.057765 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.076445 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: E0226 17:17:22.087448 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.091261 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.108917 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.123580 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.137854 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.152819 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.174095 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:21Z\\\",\\\"message\\\":\\\"2026-02-26T17:16:35+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07\\\\n2026-02-26T17:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07 to /host/opt/cni/bin/\\\\n2026-02-26T17:16:36Z [verbose] multus-daemon started\\\\n2026-02-26T17:16:36Z [verbose] Readiness Indicator file check\\\\n2026-02-26T17:17:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.187704 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.205646 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9
e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.221682 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\"
,\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.235583 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.249795 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b
4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.263966 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.278059 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.290514 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.313003 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.329296 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.341292 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:22Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.953002 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.953117 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.953192 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:22 crc kubenswrapper[4805]: I0226 17:17:22.953251 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:22 crc kubenswrapper[4805]: E0226 17:17:22.953242 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:22 crc kubenswrapper[4805]: E0226 17:17:22.953446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:22 crc kubenswrapper[4805]: E0226 17:17:22.953685 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:22 crc kubenswrapper[4805]: E0226 17:17:22.953863 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.049378 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/0.log" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.049478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerStarted","Data":"6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a"} Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.068069 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.086054 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.099927 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.112500 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.134768 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.147338 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.163289 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.176806 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.192722 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.221853 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.238959 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.257567 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.273681 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.292890 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.309220 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:21Z\\\",\\\"message\\\":\\\"2026-02-26T17:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07\\\\n2026-02-26T17:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07 to /host/opt/cni/bin/\\\\n2026-02-26T17:16:36Z [verbose] multus-daemon started\\\\n2026-02-26T17:16:36Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T17:17:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.321594 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.334577 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.347268 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:23 crc kubenswrapper[4805]: I0226 17:17:23.364659 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:23Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:24 crc kubenswrapper[4805]: I0226 17:17:24.952982 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:24 crc kubenswrapper[4805]: I0226 17:17:24.953081 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:24 crc kubenswrapper[4805]: E0226 17:17:24.953202 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:24 crc kubenswrapper[4805]: I0226 17:17:24.953260 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:24 crc kubenswrapper[4805]: E0226 17:17:24.953313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:24 crc kubenswrapper[4805]: E0226 17:17:24.953398 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:24 crc kubenswrapper[4805]: I0226 17:17:24.953565 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:24 crc kubenswrapper[4805]: E0226 17:17:24.953664 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:26 crc kubenswrapper[4805]: I0226 17:17:26.952045 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:26 crc kubenswrapper[4805]: I0226 17:17:26.952100 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:26 crc kubenswrapper[4805]: I0226 17:17:26.952076 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:26 crc kubenswrapper[4805]: E0226 17:17:26.952186 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:26 crc kubenswrapper[4805]: I0226 17:17:26.952045 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:26 crc kubenswrapper[4805]: E0226 17:17:26.952616 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:26 crc kubenswrapper[4805]: E0226 17:17:26.952605 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:26 crc kubenswrapper[4805]: E0226 17:17:26.952806 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:26 crc kubenswrapper[4805]: I0226 17:17:26.968122 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:26Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.004084 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.021332 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.036541 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.051731 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.067255 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.079631 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: E0226 17:17:27.088279 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.103251 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb
973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.118303 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.132208 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.144246 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.157242 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:21Z\\\",\\\"message\\\":\\\"2026-02-26T17:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07\\\\n2026-02-26T17:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07 to /host/opt/cni/bin/\\\\n2026-02-26T17:16:36Z [verbose] multus-daemon started\\\\n2026-02-26T17:16:36Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T17:17:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.170728 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.195525 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9
e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.215630 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\"
,\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.232790 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.247641 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.263260 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:27 crc kubenswrapper[4805]: I0226 17:17:27.277703 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:27Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:28 crc kubenswrapper[4805]: I0226 17:17:28.953146 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:28 crc kubenswrapper[4805]: E0226 17:17:28.953290 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:28 crc kubenswrapper[4805]: I0226 17:17:28.953319 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:28 crc kubenswrapper[4805]: I0226 17:17:28.953407 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:28 crc kubenswrapper[4805]: E0226 17:17:28.953507 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:28 crc kubenswrapper[4805]: E0226 17:17:28.953563 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:28 crc kubenswrapper[4805]: I0226 17:17:28.953846 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:28 crc kubenswrapper[4805]: E0226 17:17:28.953930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.349757 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.349824 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.349840 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.349866 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.349884 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:30Z","lastTransitionTime":"2026-02-26T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.364541 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:30Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.369977 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.370046 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.370063 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.370082 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.370094 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:30Z","lastTransitionTime":"2026-02-26T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.387637 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:30Z is after 2025-08-24T17:21:41Z"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.392560 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.392614 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.392627 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.392648 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.392661 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:30Z","lastTransitionTime":"2026-02-26T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.410587 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:30Z is after 2025-08-24T17:21:41Z"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.416225 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.416299 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.416317 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.416343 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.416363 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:30Z","lastTransitionTime":"2026-02-26T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.435057 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:30Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.440468 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.440535 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.440552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.440579 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.440597 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:30Z","lastTransitionTime":"2026-02-26T17:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.456407 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"aed24eaa-3a4d-469f-a8c6-d3249b191765\\\",\\\"systemUUID\\\":\\\"f8358516-caaf-41d0-bd53-1417d7f7dfbf\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:30Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.456569 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.953127 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.953216 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.953263 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.953396 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.953564 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.953736 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.954623 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:30 crc kubenswrapper[4805]: E0226 17:17:30.954978 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:30 crc kubenswrapper[4805]: I0226 17:17:30.955049 4805 scope.go:117] "RemoveContainer" containerID="5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f" Feb 26 17:17:31 crc kubenswrapper[4805]: I0226 17:17:31.078446 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/2.log" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.088682 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/3.log" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.089295 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/2.log" Feb 26 17:17:32 crc kubenswrapper[4805]: E0226 17:17:32.089361 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.092672 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" exitCode=1 Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.092733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.092791 4805 scope.go:117] "RemoveContainer" containerID="5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.093685 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:17:32 crc kubenswrapper[4805]: E0226 17:17:32.093851 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.109380 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.120436 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.131974 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.143875 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.160208 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.172367 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.192631 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5fb6663ae4b2e6e8316d62fb7aba91569b3116677209607d3c69489c25fa097f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:01Z\\\",\\\"message\\\":\\\"s_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"de17f0de-cfb1-4534-bb42-c40f5e050c73\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:01.799618 7089 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:31Z\\\",\\\"message\\\":\\\"uters:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:31.935798 7385 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae
2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.208465 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.227403 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.239709 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.253560 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.266321 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.281554 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\"
,\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.296564 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.309467 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.321948 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.336263 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:21Z\\\",\\\"message\\\":\\\"2026-02-26T17:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07\\\\n2026-02-26T17:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07 to /host/opt/cni/bin/\\\\n2026-02-26T17:16:36Z [verbose] multus-daemon started\\\\n2026-02-26T17:16:36Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T17:17:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.345740 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.364089 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9
e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:32Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.952507 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.952536 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.952586 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:32 crc kubenswrapper[4805]: I0226 17:17:32.952494 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:32 crc kubenswrapper[4805]: E0226 17:17:32.952662 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:32 crc kubenswrapper[4805]: E0226 17:17:32.952808 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:32 crc kubenswrapper[4805]: E0226 17:17:32.952911 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:32 crc kubenswrapper[4805]: E0226 17:17:32.953002 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.097165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/3.log" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.354487 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.356106 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:17:33 crc kubenswrapper[4805]: E0226 17:17:33.356386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.376519 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.392786 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.405769 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.424869 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.441369 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.462199 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.476777 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.499948 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:31Z\\\",\\\"message\\\":\\\"uters:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:31.935798 7385 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.515084 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.537912 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.552243 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.569965 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.618115 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc 
kubenswrapper[4805]: I0226 17:17:33.654361 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.676453 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.689656 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.701563 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.715166 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:33 crc kubenswrapper[4805]: I0226 17:17:33.727055 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:21Z\\\",\\\"message\\\":\\\"2026-02-26T17:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07\\\\n2026-02-26T17:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07 to /host/opt/cni/bin/\\\\n2026-02-26T17:16:36Z [verbose] multus-daemon started\\\\n2026-02-26T17:16:36Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T17:17:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:33Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:34 crc kubenswrapper[4805]: I0226 17:17:34.953157 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:34 crc kubenswrapper[4805]: I0226 17:17:34.953258 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:34 crc kubenswrapper[4805]: I0226 17:17:34.953273 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:34 crc kubenswrapper[4805]: I0226 17:17:34.953171 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:34 crc kubenswrapper[4805]: E0226 17:17:34.953383 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:34 crc kubenswrapper[4805]: E0226 17:17:34.953535 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:34 crc kubenswrapper[4805]: E0226 17:17:34.953498 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:34 crc kubenswrapper[4805]: E0226 17:17:34.953695 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:36 crc kubenswrapper[4805]: I0226 17:17:36.952503 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:36 crc kubenswrapper[4805]: I0226 17:17:36.952564 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:36 crc kubenswrapper[4805]: E0226 17:17:36.952637 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:36 crc kubenswrapper[4805]: I0226 17:17:36.952521 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:36 crc kubenswrapper[4805]: I0226 17:17:36.952737 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:36 crc kubenswrapper[4805]: E0226 17:17:36.952818 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:36 crc kubenswrapper[4805]: E0226 17:17:36.952938 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:36 crc kubenswrapper[4805]: E0226 17:17:36.952996 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:36 crc kubenswrapper[4805]: I0226 17:17:36.976137 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c5c03a-8e62-423f-a200-268c5ce427cf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d2eebfb7c493826ba7323db89101f2e8328db0114a3315e98bebba40c4e45a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8326f4c52e7600d16343e34c506cc2965d03fb136016e58b4e6bbd2a3220da0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0226 17:15:13.311159 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0226 17:15:13.312724 1 observer_polling.go:159] Starting file observer\\\\nI0226 17:15:13.314063 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0226 17:15:13.315100 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0226 17:15:42.940483 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0226 17:15:42.940700 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:13Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb76989f99dc85548cf68fedb8b05b76aeed8628772502bf3a6c820635f5ed60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe5fcd2ef94d1daafee16035d9fe7743229aed752e4e3a2cbb3e39f11620f90f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:36 crc kubenswrapper[4805]: I0226 17:17:36.996377 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9d4b760-e6f7-4436-a084-0ad6b66bed3d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e49ec5b4e10dc6dac71d5faf7f0e1b1183cab3aad802aae8c801a1ea4baa4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208dd9e5f6749b5e8c9b4f0d63e86d91b0c64d979b8d23b4a80169ca30565a45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2798a01279759b3561383b0d984c845729bfb391addb4d35a637600cfb46c9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://a451f932847342121502796c41fd3b32fe0b6faa4746377ed06b630d0f447fd5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:36Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.013133 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25e83477-65d0-41be-8e55-fdacfc5871a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57fc8ccb64122947b5a7cca9213198f432744cd553224168c0c46c30692dee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7f80b34e3665661c1452d156de81be86bde44bd
78e72eddf52da56e7b3231bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lpw5p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2mnb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.032786 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://406ee0ca0120ca5ae395a260da90d8b3a15bb6294b69d86fbaf5c9b5502a509d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.054332 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.074284 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: E0226 17:17:37.090282 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.094223 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d4ls2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5488ec5b-183b-423e-a38d-bf3aaf73b6f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f108050dc8903541643b76d657da2ecc032d6ae2700c3150b7bcde71e25cc45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jxgv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d4ls2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.124541 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d434db3-db90-41b2-9bd3-e6ef3009f878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:31Z\\\",\\\"message\\\":\\\"uters:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.244\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0226 17:17:31.935798 7385 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:17:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4238a503deb42cdb5
3162b712c419a5cb2f84b338dae2102affc73e70920607\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5kffw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqbgw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.141237 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d50a504-abcc-48e7-87e0-3e6afb58f66e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ea4296c5f0aeccbd6407ca35a2f863bc14b58e0d445271541413e2bd411970c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24f5115c5c9ba391bf4503f7e0868747a2591f781173fb55a8de06d90f4e5d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.166124 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-54ch7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fb6c537-e08f-48af-a1c8-5879a8519a5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d6de27318643cad30debab75938fac7914dda795f3a5c911101cb1da5d5e8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51fe4f7a2f90d7aa7fa8fb174048bbeda72ee57001049e1e7067ef6a4dd11237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77be707cb973164e8a85860d302e677e91336d50043902e967b403b68b4165c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a0bb0cd7430bf01540258254531d519943740542f21e53fc505e2f3499d2a09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13073
e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13073e35d32d0174db33fdbed16c6bbfcc50d188f1a30a148142c4f5cb0bbf87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21951f9da5749dcadfaa6dc472504d5e48feccd15b430e06044abd8c7311efd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fa46aca5a940a014559c04f9b6e6cf76fdc63b3a4983e19aa5d0cf17565659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:16:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pc9dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-54ch7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.181880 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjq6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"619e6250-cc24-43ca-a031-f79f954df6d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dafe627cf6c7501f852d717e5f74d8df670de56fe64e20a6f9c45052dd4a22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-26T17:16:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6zn4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjq6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.196554 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a674c1d-d647-41a7-a989-7e604dd9865a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a2fc75495b622246ff13b084e44b4eca3f17555dda3ffdf0095fc535f29325e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1ecc506d706c8a1a59db461b134b8a0985cc
022014987a6bbf1b0da5de90ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shz7d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5ss5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.209612 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kwfx6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:45Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hbv6d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc 
kubenswrapper[4805]: I0226 17:17:37.233157 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16f7035b-f209-4aa2-97aa-1a7fa776c0db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:15:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7cdda02b24cc60a20cbb3f7d631fd6a0487a6f2825f33eceb91eb2bd0219cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://131a5353a9dbf2a5d786e66aa9943e64a77087450b80f79e696c50d94fffcbb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d233b98d05a8490d1a252feacdeec703cee1690b15694a61714dc937a8caf2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8f548df577a4bd928d2027364264dc9e9f0a78d5340dd5d8e9f7f8f8bcfb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dce7342d2fedbaa2f55e1970f4afde35892dbfc9aa7fc2e251a8a32398016e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31868e4bfc8f78500c2ac8fb7ec71c37f9000f21d8023c57423bc66efa98494e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51d4eeefbb4a94c911e7647d1629c04b227ca76b356abb8a471c88d90da29ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://771832ed369c4806f8aac12c9aa751078482a4cebeb8d0efdd3fe0b49c922c24\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.248809 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:14:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-26T17:15:43Z\\\",\\\"message\\\":\\\"W0226 17:15:42.242413 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0226 17:15:42.242796 1 crypto.go:601] Generating new CA for check-endpoints-signer@1772126142 cert, and key in /tmp/serving-cert-2247879767/serving-signer.crt, /tmp/serving-cert-2247879767/serving-signer.key\\\\nI0226 17:15:42.466631 1 observer_polling.go:159] Starting file observer\\\\nW0226 17:15:42.475318 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\nI0226 17:15:42.475813 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0226 17:15:42.476890 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new 
cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2247879767/tls.crt::/tmp/serving-cert-2247879767/tls.key\\\\\\\"\\\\nF0226 17:15:42.892516 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:15:42Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:15:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:14:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-26T17:14:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-26T17:14:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:14:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.263529 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.277629 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc0c323feaa84d901ca826ebec12cfa5d865e28da3364ea74c0d0c7edd26b631\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.293061 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0ded33135ef7a8b80bd7370766573a77bb2da8bfafe8e008af127252518c2bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c827d2cc5628b0
eec01031cefcfe20a327d2e0c8d556af3703646c54c6c6e63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:16:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:37 crc kubenswrapper[4805]: I0226 17:17:37.313087 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tv2pd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4cefacfa-0108-4252-aa69-4b35bcc0f69f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:16:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-26T17:17:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-26T17:17:21Z\\\",\\\"message\\\":\\\"2026-02-26T17:16:35+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07\\\\n2026-02-26T17:16:35+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5c4ff198-2896-499a-ae44-42685854af07 to /host/opt/cni/bin/\\\\n2026-02-26T17:16:36Z [verbose] multus-daemon started\\\\n2026-02-26T17:16:36Z [verbose] 
Readiness Indicator file check\\\\n2026-02-26T17:17:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-26T17:16:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:17:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rswkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-26T17:16:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tv2pd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-26T17:17:37Z is after 2025-08-24T17:21:41Z" Feb 26 17:17:38 crc kubenswrapper[4805]: I0226 17:17:38.953079 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:38 crc kubenswrapper[4805]: I0226 17:17:38.953161 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:38 crc kubenswrapper[4805]: E0226 17:17:38.953227 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:38 crc kubenswrapper[4805]: I0226 17:17:38.953078 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:38 crc kubenswrapper[4805]: E0226 17:17:38.953317 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:38 crc kubenswrapper[4805]: E0226 17:17:38.953459 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:38 crc kubenswrapper[4805]: I0226 17:17:38.954245 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:38 crc kubenswrapper[4805]: E0226 17:17:38.954481 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.537692 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.537734 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.537745 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.537761 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.537771 4805 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-26T17:17:40Z","lastTransitionTime":"2026-02-26T17:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.586357 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff"] Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.587327 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.591671 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.591856 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.591910 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.592838 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.627496 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.627475361 podStartE2EDuration="27.627475361s" podCreationTimestamp="2026-02-26 17:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.604776083 +0000 UTC m=+175.166530422" watchObservedRunningTime="2026-02-26 17:17:40.627475361 +0000 UTC m=+175.189229700" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.643197 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-54ch7" podStartSLOduration=99.643172197 podStartE2EDuration="1m39.643172197s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.632754491 +0000 UTC m=+175.194508830" watchObservedRunningTime="2026-02-26 17:17:40.643172197 
+0000 UTC m=+175.204926576" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.643709 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bjq6x" podStartSLOduration=99.64369481 podStartE2EDuration="1m39.64369481s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.643341571 +0000 UTC m=+175.205095910" watchObservedRunningTime="2026-02-26 17:17:40.64369481 +0000 UTC m=+175.205449189" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.671290 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5ss5q" podStartSLOduration=98.671257858 podStartE2EDuration="1m38.671257858s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.655898411 +0000 UTC m=+175.217652780" watchObservedRunningTime="2026-02-26 17:17:40.671257858 +0000 UTC m=+175.233012197" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.680656 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.680719 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: 
\"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.680747 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.680830 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.680895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.686222 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tv2pd" podStartSLOduration=99.686197566 podStartE2EDuration="1m39.686197566s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.685972161 +0000 UTC m=+175.247726500" watchObservedRunningTime="2026-02-26 17:17:40.686197566 
+0000 UTC m=+175.247951915" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.737328 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=82.737307814 podStartE2EDuration="1m22.737307814s" podCreationTimestamp="2026-02-26 17:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.737074578 +0000 UTC m=+175.298828937" watchObservedRunningTime="2026-02-26 17:17:40.737307814 +0000 UTC m=+175.299062163" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.769136 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=85.769107066 podStartE2EDuration="1m25.769107066s" podCreationTimestamp="2026-02-26 17:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.754303242 +0000 UTC m=+175.316057581" watchObservedRunningTime="2026-02-26 17:17:40.769107066 +0000 UTC m=+175.330861405" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.781808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.781860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.781890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.781915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.781941 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.782047 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.782062 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.782982 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.789621 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.796615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9pff\" (UID: \"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.802718 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=36.802699133 podStartE2EDuration="36.802699133s" podCreationTimestamp="2026-02-26 17:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.801892123 
+0000 UTC m=+175.363646482" watchObservedRunningTime="2026-02-26 17:17:40.802699133 +0000 UTC m=+175.364453472" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.830933 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podStartSLOduration=99.830904367 podStartE2EDuration="1m39.830904367s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.830539228 +0000 UTC m=+175.392293597" watchObservedRunningTime="2026-02-26 17:17:40.830904367 +0000 UTC m=+175.392658726" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.831470 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.831461001 podStartE2EDuration="44.831461001s" podCreationTimestamp="2026-02-26 17:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.818874931 +0000 UTC m=+175.380629270" watchObservedRunningTime="2026-02-26 17:17:40.831461001 +0000 UTC m=+175.393215360" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.903723 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.904908 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d4ls2" podStartSLOduration=99.904885158 podStartE2EDuration="1m39.904885158s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:40.903695628 +0000 UTC m=+175.465449967" watchObservedRunningTime="2026-02-26 17:17:40.904885158 +0000 UTC m=+175.466639497" Feb 26 17:17:40 crc kubenswrapper[4805]: W0226 17:17:40.915803 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1b15e4b_7dc1_4408_b74d_a9f4a2c155e7.slice/crio-57d86d2dddeec43bef2a2f00d20379144583fa20cfa5fdc1cb1ec841344ba9ff WatchSource:0}: Error finding container 57d86d2dddeec43bef2a2f00d20379144583fa20cfa5fdc1cb1ec841344ba9ff: Status 404 returned error can't find the container with id 57d86d2dddeec43bef2a2f00d20379144583fa20cfa5fdc1cb1ec841344ba9ff Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.953204 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.953227 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.953224 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:40 crc kubenswrapper[4805]: I0226 17:17:40.953209 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:40 crc kubenswrapper[4805]: E0226 17:17:40.953332 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:40 crc kubenswrapper[4805]: E0226 17:17:40.953389 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:40 crc kubenswrapper[4805]: E0226 17:17:40.953438 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:40 crc kubenswrapper[4805]: E0226 17:17:40.953481 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:41 crc kubenswrapper[4805]: I0226 17:17:41.002646 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 26 17:17:41 crc kubenswrapper[4805]: I0226 17:17:41.009383 4805 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 26 17:17:41 crc kubenswrapper[4805]: I0226 17:17:41.130064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" event={"ID":"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7","Type":"ContainerStarted","Data":"468241eac8ef6378b420dd799c239f2cb16a3081ed4b2cbbf8a7535e30080b3c"} Feb 26 17:17:41 crc kubenswrapper[4805]: I0226 17:17:41.130344 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" event={"ID":"a1b15e4b-7dc1-4408-b74d-a9f4a2c155e7","Type":"ContainerStarted","Data":"57d86d2dddeec43bef2a2f00d20379144583fa20cfa5fdc1cb1ec841344ba9ff"} Feb 26 17:17:41 crc kubenswrapper[4805]: I0226 17:17:41.145531 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9pff" podStartSLOduration=100.145506639 podStartE2EDuration="1m40.145506639s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:17:41.144865963 +0000 UTC m=+175.706620302" watchObservedRunningTime="2026-02-26 17:17:41.145506639 +0000 UTC m=+175.707260988" Feb 26 17:17:42 crc kubenswrapper[4805]: E0226 17:17:42.091733 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:42 crc kubenswrapper[4805]: I0226 17:17:42.952865 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:42 crc kubenswrapper[4805]: I0226 17:17:42.952879 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:42 crc kubenswrapper[4805]: I0226 17:17:42.952956 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:42 crc kubenswrapper[4805]: I0226 17:17:42.953104 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:42 crc kubenswrapper[4805]: E0226 17:17:42.953180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:42 crc kubenswrapper[4805]: E0226 17:17:42.953256 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:42 crc kubenswrapper[4805]: E0226 17:17:42.953409 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:42 crc kubenswrapper[4805]: E0226 17:17:42.953532 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:43 crc kubenswrapper[4805]: I0226 17:17:43.953554 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:17:43 crc kubenswrapper[4805]: E0226 17:17:43.954074 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:44 crc kubenswrapper[4805]: I0226 17:17:44.952212 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:44 crc kubenswrapper[4805]: I0226 17:17:44.952249 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:44 crc kubenswrapper[4805]: E0226 17:17:44.952418 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:44 crc kubenswrapper[4805]: I0226 17:17:44.952529 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:44 crc kubenswrapper[4805]: E0226 17:17:44.952537 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:44 crc kubenswrapper[4805]: I0226 17:17:44.952587 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:44 crc kubenswrapper[4805]: E0226 17:17:44.952858 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:44 crc kubenswrapper[4805]: E0226 17:17:44.952903 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:46 crc kubenswrapper[4805]: I0226 17:17:46.952012 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:46 crc kubenswrapper[4805]: I0226 17:17:46.952064 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:46 crc kubenswrapper[4805]: I0226 17:17:46.952101 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:46 crc kubenswrapper[4805]: E0226 17:17:46.952281 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:46 crc kubenswrapper[4805]: I0226 17:17:46.952380 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:46 crc kubenswrapper[4805]: E0226 17:17:46.953888 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:46 crc kubenswrapper[4805]: E0226 17:17:46.954060 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:46 crc kubenswrapper[4805]: E0226 17:17:46.954158 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:47 crc kubenswrapper[4805]: E0226 17:17:47.092268 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:48 crc kubenswrapper[4805]: I0226 17:17:48.952840 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:48 crc kubenswrapper[4805]: I0226 17:17:48.952884 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:48 crc kubenswrapper[4805]: I0226 17:17:48.953961 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:48 crc kubenswrapper[4805]: E0226 17:17:48.954226 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:48 crc kubenswrapper[4805]: I0226 17:17:48.954304 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:48 crc kubenswrapper[4805]: E0226 17:17:48.954577 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:48 crc kubenswrapper[4805]: E0226 17:17:48.954677 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:48 crc kubenswrapper[4805]: E0226 17:17:48.955012 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:49 crc kubenswrapper[4805]: I0226 17:17:49.656416 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:49 crc kubenswrapper[4805]: E0226 17:17:49.656569 4805 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:17:49 crc kubenswrapper[4805]: E0226 17:17:49.656620 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs podName:d6e20a5b-84fd-4e2d-836c-a3891ef809dc nodeName:}" failed. No retries permitted until 2026-02-26 17:18:53.656606567 +0000 UTC m=+248.218360906 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs") pod "network-metrics-daemon-hbv6d" (UID: "d6e20a5b-84fd-4e2d-836c-a3891ef809dc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 26 17:17:50 crc kubenswrapper[4805]: I0226 17:17:50.952422 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:50 crc kubenswrapper[4805]: E0226 17:17:50.952709 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:50 crc kubenswrapper[4805]: I0226 17:17:50.952746 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:50 crc kubenswrapper[4805]: I0226 17:17:50.952789 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:50 crc kubenswrapper[4805]: I0226 17:17:50.952855 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:50 crc kubenswrapper[4805]: E0226 17:17:50.952924 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:50 crc kubenswrapper[4805]: E0226 17:17:50.952958 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:50 crc kubenswrapper[4805]: E0226 17:17:50.953072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:52 crc kubenswrapper[4805]: E0226 17:17:52.094096 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:52 crc kubenswrapper[4805]: I0226 17:17:52.952971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:52 crc kubenswrapper[4805]: I0226 17:17:52.952971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:52 crc kubenswrapper[4805]: I0226 17:17:52.953550 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:52 crc kubenswrapper[4805]: I0226 17:17:52.953847 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:52 crc kubenswrapper[4805]: E0226 17:17:52.954000 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:52 crc kubenswrapper[4805]: E0226 17:17:52.953859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:52 crc kubenswrapper[4805]: E0226 17:17:52.954223 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:52 crc kubenswrapper[4805]: E0226 17:17:52.954362 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:54 crc kubenswrapper[4805]: I0226 17:17:54.953189 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:54 crc kubenswrapper[4805]: I0226 17:17:54.953287 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:54 crc kubenswrapper[4805]: I0226 17:17:54.953367 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:54 crc kubenswrapper[4805]: E0226 17:17:54.953375 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:54 crc kubenswrapper[4805]: I0226 17:17:54.953394 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:54 crc kubenswrapper[4805]: E0226 17:17:54.953483 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:54 crc kubenswrapper[4805]: E0226 17:17:54.953585 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:54 crc kubenswrapper[4805]: E0226 17:17:54.953645 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:55 crc kubenswrapper[4805]: I0226 17:17:55.953237 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:17:55 crc kubenswrapper[4805]: E0226 17:17:55.953482 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:17:56 crc kubenswrapper[4805]: I0226 17:17:56.952237 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:56 crc kubenswrapper[4805]: I0226 17:17:56.952305 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:56 crc kubenswrapper[4805]: I0226 17:17:56.952261 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:56 crc kubenswrapper[4805]: I0226 17:17:56.952243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:56 crc kubenswrapper[4805]: E0226 17:17:56.953749 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:56 crc kubenswrapper[4805]: E0226 17:17:56.953933 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:56 crc kubenswrapper[4805]: E0226 17:17:56.954353 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:56 crc kubenswrapper[4805]: E0226 17:17:56.954454 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:17:57 crc kubenswrapper[4805]: E0226 17:17:57.094619 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:17:58 crc kubenswrapper[4805]: I0226 17:17:58.953148 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:17:58 crc kubenswrapper[4805]: I0226 17:17:58.953177 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:17:58 crc kubenswrapper[4805]: I0226 17:17:58.953146 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:17:58 crc kubenswrapper[4805]: I0226 17:17:58.953281 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:17:58 crc kubenswrapper[4805]: E0226 17:17:58.953443 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:17:58 crc kubenswrapper[4805]: E0226 17:17:58.953708 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:17:58 crc kubenswrapper[4805]: E0226 17:17:58.953859 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:17:58 crc kubenswrapper[4805]: E0226 17:17:58.954088 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:00 crc kubenswrapper[4805]: I0226 17:18:00.952113 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:00 crc kubenswrapper[4805]: I0226 17:18:00.952225 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:00 crc kubenswrapper[4805]: E0226 17:18:00.952261 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:00 crc kubenswrapper[4805]: I0226 17:18:00.952131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:00 crc kubenswrapper[4805]: I0226 17:18:00.952328 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:00 crc kubenswrapper[4805]: E0226 17:18:00.952478 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:00 crc kubenswrapper[4805]: E0226 17:18:00.952538 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:00 crc kubenswrapper[4805]: E0226 17:18:00.952606 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:02 crc kubenswrapper[4805]: E0226 17:18:02.096004 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:18:02 crc kubenswrapper[4805]: I0226 17:18:02.952658 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:02 crc kubenswrapper[4805]: E0226 17:18:02.952846 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:02 crc kubenswrapper[4805]: I0226 17:18:02.952895 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:02 crc kubenswrapper[4805]: I0226 17:18:02.952918 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:02 crc kubenswrapper[4805]: I0226 17:18:02.952923 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:02 crc kubenswrapper[4805]: E0226 17:18:02.953067 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:02 crc kubenswrapper[4805]: E0226 17:18:02.953147 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:02 crc kubenswrapper[4805]: E0226 17:18:02.953245 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:04 crc kubenswrapper[4805]: I0226 17:18:04.952508 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:04 crc kubenswrapper[4805]: I0226 17:18:04.952971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:04 crc kubenswrapper[4805]: I0226 17:18:04.953078 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:04 crc kubenswrapper[4805]: I0226 17:18:04.952910 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:04 crc kubenswrapper[4805]: E0226 17:18:04.953174 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:04 crc kubenswrapper[4805]: E0226 17:18:04.953298 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:04 crc kubenswrapper[4805]: E0226 17:18:04.953384 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:04 crc kubenswrapper[4805]: E0226 17:18:04.953446 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:06 crc kubenswrapper[4805]: I0226 17:18:06.952132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:06 crc kubenswrapper[4805]: I0226 17:18:06.952220 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:06 crc kubenswrapper[4805]: E0226 17:18:06.954243 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:06 crc kubenswrapper[4805]: I0226 17:18:06.954267 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:06 crc kubenswrapper[4805]: E0226 17:18:06.954819 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:06 crc kubenswrapper[4805]: E0226 17:18:06.954413 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:06 crc kubenswrapper[4805]: I0226 17:18:06.954306 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:06 crc kubenswrapper[4805]: E0226 17:18:06.955461 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:07 crc kubenswrapper[4805]: E0226 17:18:07.096786 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:18:07 crc kubenswrapper[4805]: I0226 17:18:07.954166 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:18:07 crc kubenswrapper[4805]: E0226 17:18:07.954546 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqbgw_openshift-ovn-kubernetes(1d434db3-db90-41b2-9bd3-e6ef3009f878)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.221504 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/1.log" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.222490 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/0.log" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 
17:18:08.222618 4805 generic.go:334] "Generic (PLEG): container finished" podID="4cefacfa-0108-4252-aa69-4b35bcc0f69f" containerID="6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a" exitCode=1 Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.222697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerDied","Data":"6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a"} Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.222795 4805 scope.go:117] "RemoveContainer" containerID="bc7651edb12f1de4fecb51b215eb060181a4cc783dcad88f15d4246c776f09a4" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.223243 4805 scope.go:117] "RemoveContainer" containerID="6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a" Feb 26 17:18:08 crc kubenswrapper[4805]: E0226 17:18:08.223464 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tv2pd_openshift-multus(4cefacfa-0108-4252-aa69-4b35bcc0f69f)\"" pod="openshift-multus/multus-tv2pd" podUID="4cefacfa-0108-4252-aa69-4b35bcc0f69f" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.952849 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.952963 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:08 crc kubenswrapper[4805]: E0226 17:18:08.953011 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.953071 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:08 crc kubenswrapper[4805]: I0226 17:18:08.953094 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:08 crc kubenswrapper[4805]: E0226 17:18:08.953189 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:08 crc kubenswrapper[4805]: E0226 17:18:08.953507 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:08 crc kubenswrapper[4805]: E0226 17:18:08.953600 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:09 crc kubenswrapper[4805]: I0226 17:18:09.228095 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/1.log" Feb 26 17:18:10 crc kubenswrapper[4805]: I0226 17:18:10.952490 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:10 crc kubenswrapper[4805]: I0226 17:18:10.952532 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:10 crc kubenswrapper[4805]: I0226 17:18:10.952591 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:10 crc kubenswrapper[4805]: I0226 17:18:10.952504 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:10 crc kubenswrapper[4805]: E0226 17:18:10.952782 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:10 crc kubenswrapper[4805]: E0226 17:18:10.952668 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:10 crc kubenswrapper[4805]: E0226 17:18:10.952920 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:10 crc kubenswrapper[4805]: E0226 17:18:10.953003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:12 crc kubenswrapper[4805]: E0226 17:18:12.098529 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:18:12 crc kubenswrapper[4805]: I0226 17:18:12.952702 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:12 crc kubenswrapper[4805]: E0226 17:18:12.952845 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:12 crc kubenswrapper[4805]: I0226 17:18:12.953073 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:12 crc kubenswrapper[4805]: I0226 17:18:12.953091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:12 crc kubenswrapper[4805]: I0226 17:18:12.953125 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:12 crc kubenswrapper[4805]: E0226 17:18:12.953354 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:12 crc kubenswrapper[4805]: E0226 17:18:12.953506 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:12 crc kubenswrapper[4805]: E0226 17:18:12.953566 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:14 crc kubenswrapper[4805]: I0226 17:18:14.952308 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:14 crc kubenswrapper[4805]: I0226 17:18:14.952431 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:14 crc kubenswrapper[4805]: E0226 17:18:14.952507 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:14 crc kubenswrapper[4805]: E0226 17:18:14.952626 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:14 crc kubenswrapper[4805]: I0226 17:18:14.952319 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:14 crc kubenswrapper[4805]: I0226 17:18:14.952695 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:14 crc kubenswrapper[4805]: E0226 17:18:14.952747 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:14 crc kubenswrapper[4805]: E0226 17:18:14.952812 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:16 crc kubenswrapper[4805]: I0226 17:18:16.952565 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:16 crc kubenswrapper[4805]: I0226 17:18:16.952691 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:16 crc kubenswrapper[4805]: I0226 17:18:16.953917 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:16 crc kubenswrapper[4805]: I0226 17:18:16.953931 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:16 crc kubenswrapper[4805]: E0226 17:18:16.954198 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:16 crc kubenswrapper[4805]: E0226 17:18:16.953865 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:16 crc kubenswrapper[4805]: E0226 17:18:16.954473 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:16 crc kubenswrapper[4805]: E0226 17:18:16.954577 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:17 crc kubenswrapper[4805]: E0226 17:18:17.099240 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.071395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.071549 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071636 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-26 17:20:20.071602902 +0000 UTC m=+334.633357251 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071699 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071726 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071745 4805 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.071757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071812 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-26 17:20:20.071787877 +0000 UTC m=+334.633542276 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.071845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.071892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071911 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071931 4805 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071944 4805 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071992 4805 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.071996 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-26 17:20:20.071985802 +0000 UTC m=+334.633740221 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.072088 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:20:20.072065954 +0000 UTC m=+334.633820313 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.072133 4805 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.072172 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-26 17:20:20.072160496 +0000 UTC m=+334.633914935 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.952393 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.952524 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.952644 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.952665 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.952752 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.952840 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.952896 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:18 crc kubenswrapper[4805]: I0226 17:18:18.952984 4805 scope.go:117] "RemoveContainer" containerID="6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a" Feb 26 17:18:18 crc kubenswrapper[4805]: E0226 17:18:18.953081 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:20 crc kubenswrapper[4805]: I0226 17:18:20.268275 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/1.log" Feb 26 17:18:20 crc kubenswrapper[4805]: I0226 17:18:20.268645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerStarted","Data":"f05a848fc8fc044ab0d3f773b175d5c5a19a41680b5d67af5b6dabb86f31f070"} Feb 26 17:18:20 crc kubenswrapper[4805]: I0226 17:18:20.952602 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:20 crc kubenswrapper[4805]: I0226 17:18:20.952709 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:20 crc kubenswrapper[4805]: E0226 17:18:20.952789 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:20 crc kubenswrapper[4805]: I0226 17:18:20.952841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:20 crc kubenswrapper[4805]: I0226 17:18:20.952953 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:20 crc kubenswrapper[4805]: E0226 17:18:20.953178 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:20 crc kubenswrapper[4805]: E0226 17:18:20.953308 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:20 crc kubenswrapper[4805]: E0226 17:18:20.953390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:22 crc kubenswrapper[4805]: E0226 17:18:22.100150 4805 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:18:22 crc kubenswrapper[4805]: I0226 17:18:22.952531 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:22 crc kubenswrapper[4805]: I0226 17:18:22.952582 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:22 crc kubenswrapper[4805]: I0226 17:18:22.952561 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:22 crc kubenswrapper[4805]: I0226 17:18:22.952536 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:22 crc kubenswrapper[4805]: E0226 17:18:22.952667 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:22 crc kubenswrapper[4805]: E0226 17:18:22.952783 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:22 crc kubenswrapper[4805]: E0226 17:18:22.952942 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:22 crc kubenswrapper[4805]: E0226 17:18:22.953764 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:22 crc kubenswrapper[4805]: I0226 17:18:22.954405 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.096298 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hbv6d"] Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.096456 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:24 crc kubenswrapper[4805]: E0226 17:18:24.096644 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.281268 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/3.log" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.284900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerStarted","Data":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.285271 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.324473 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podStartSLOduration=143.324452829 podStartE2EDuration="2m23.324452829s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:24.323571547 +0000 UTC m=+218.885325896" watchObservedRunningTime="2026-02-26 17:18:24.324452829 +0000 UTC m=+218.886207168" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.953113 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.953155 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:24 crc kubenswrapper[4805]: I0226 17:18:24.953260 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:24 crc kubenswrapper[4805]: E0226 17:18:24.953378 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:24 crc kubenswrapper[4805]: E0226 17:18:24.953495 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:24 crc kubenswrapper[4805]: E0226 17:18:24.953700 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:25 crc kubenswrapper[4805]: I0226 17:18:25.952697 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:25 crc kubenswrapper[4805]: E0226 17:18:25.952931 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hbv6d" podUID="d6e20a5b-84fd-4e2d-836c-a3891ef809dc" Feb 26 17:18:26 crc kubenswrapper[4805]: I0226 17:18:26.952769 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:26 crc kubenswrapper[4805]: I0226 17:18:26.952893 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:26 crc kubenswrapper[4805]: I0226 17:18:26.952890 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:26 crc kubenswrapper[4805]: E0226 17:18:26.954927 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:18:26 crc kubenswrapper[4805]: E0226 17:18:26.954989 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:18:26 crc kubenswrapper[4805]: E0226 17:18:26.955144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:18:27 crc kubenswrapper[4805]: I0226 17:18:27.952103 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:27 crc kubenswrapper[4805]: I0226 17:18:27.954633 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 17:18:27 crc kubenswrapper[4805]: I0226 17:18:27.955884 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.952799 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.952916 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.953168 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.956280 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.956447 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.959886 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 17:18:28 crc kubenswrapper[4805]: I0226 17:18:28.960337 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.570552 4805 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.611981 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.614967 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ngs29"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.615223 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.615532 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.615937 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.616010 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v45rz"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.616389 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.616711 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.619496 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mpnkz"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.620415 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.621348 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mcvr5"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.621950 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.622008 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkzq5"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.622589 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.622624 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.623091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.639353 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.643415 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.643542 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.643904 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.644352 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.644821 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.644936 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.646128 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.646226 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" 
Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.646111 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.646632 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.649896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.651799 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.670424 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.671094 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.671343 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.671691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.671852 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672296 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672388 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672607 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672731 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672747 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672775 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672845 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672869 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 
17:18:31.672904 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672960 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672653 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673047 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673365 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673060 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673431 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.672659 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673083 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673147 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673184 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 
17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673574 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673204 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673239 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673244 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673248 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.673255 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.674268 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.674694 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ml8xl"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.675180 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.675463 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.677271 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.678102 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.678411 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.680475 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.680768 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.681056 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.683917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.684493 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.684567 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.684970 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.685286 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.685456 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.687286 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.692689 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.698743 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.698768 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.698814 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.700429 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-czlsm"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.701043 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rw6n5"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.701218 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.701402 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.701606 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.701677 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-lrc7d"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.703952 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.704177 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.707475 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.709458 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.710046 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.721784 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.735334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.735683 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-client-ca\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.735790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-encryption-config\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.735998 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: 
\"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.736060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1c73718-f5ac-4887-b013-04660314d6ac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.736300 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwbdb\" (UniqueName: \"kubernetes.io/projected/38eb7141-6e03-493c-855f-def45c0e7977-kube-api-access-fwbdb\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.736444 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/99edc210-b315-4224-8d9f-a5911f8527b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e30706-eb6c-42d4-a28d-aa664f89ed80-serving-cert\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737746 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-audit\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737913 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-serving-cert\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.737975 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-config\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738002 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738049 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99b673f3-8f28-492a-a26e-a5254f7ef796-machine-approver-tls\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738088 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-etcd-client\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738117 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99b673f3-8f28-492a-a26e-a5254f7ef796-auth-proxy-config\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmb9h\" (UniqueName: \"kubernetes.io/projected/99edc210-b315-4224-8d9f-a5911f8527b2-kube-api-access-dmb9h\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbnd\" (UniqueName: \"kubernetes.io/projected/05e30706-eb6c-42d4-a28d-aa664f89ed80-kube-api-access-qwbnd\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vns2j\" (UniqueName: \"kubernetes.io/projected/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-kube-api-access-vns2j\") pod \"oauth-openshift-558db77b4-mcvr5\" 
(UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738318 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-config\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-config\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738406 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99edc210-b315-4224-8d9f-a5911f8527b2-config\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-image-import-ca\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f440d939-b304-4728-afc4-ad814d771fbb-serving-cert\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-dir\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: 
\"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738646 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1c73718-f5ac-4887-b013-04660314d6ac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.738802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpd2h\" (UniqueName: \"kubernetes.io/projected/f440d939-b304-4728-afc4-ad814d771fbb-kube-api-access-lpd2h\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/99edc210-b315-4224-8d9f-a5911f8527b2-images\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739180 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8szxp\" (UniqueName: \"kubernetes.io/projected/99b673f3-8f28-492a-a26e-a5254f7ef796-kube-api-access-8szxp\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6j4c\" (UniqueName: \"kubernetes.io/projected/f6af6767-3331-4e84-97c7-385cd642443c-kube-api-access-s6j4c\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-client-ca\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739329 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6af6767-3331-4e84-97c7-385cd642443c-node-pullsecrets\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739356 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38eb7141-6e03-493c-855f-def45c0e7977-serving-cert\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: 
\"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-service-ca-bundle\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k42f\" (UniqueName: \"kubernetes.io/projected/d1c73718-f5ac-4887-b013-04660314d6ac-kube-api-access-7k42f\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-config\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739461 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: 
I0226 17:18:31.739487 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739535 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-policies\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/99b673f3-8f28-492a-a26e-a5254f7ef796-config\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.739689 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6af6767-3331-4e84-97c7-385cd642443c-audit-dir\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.740458 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.785159 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-2dnn9"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.786036 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.786064 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tshb4"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.786444 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.787102 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ngs29"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.787126 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mpnkz"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 
17:18:31.787141 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.787438 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-bnlgh"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.787736 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v45rz"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.787776 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788154 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788183 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rw6n5"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788316 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788618 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ml8xl"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788710 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788958 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.788976 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.789057 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.789129 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.789530 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.789651 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790191 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790321 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790421 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790639 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790757 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790836 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.790952 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791287 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791351 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791426 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791490 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791704 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791824 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791966 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792251 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792359 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792254 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792447 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792442 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792469 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.791970 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792488 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792508 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.792518 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.794098 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.796174 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.796217 4805 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.796239 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.796450 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.796568 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.796808 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.797124 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.797248 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.798119 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.798468 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.798758 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.798797 4805 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.799112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.799258 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.799369 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.799875 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.800099 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.800144 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.807083 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.807691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.810837 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.817196 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.817440 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.817815 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.818121 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.818425 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.818710 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-79bz8"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.818974 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.819334 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.819604 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.819670 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.819761 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.819812 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.819897 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.820093 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.820586 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.820784 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.824262 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.828254 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.831063 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.831556 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.832078 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d7sjf"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.832541 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.833737 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.835344 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.837174 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.837351 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.838376 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4b67j"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.839564 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.840645 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.840870 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.841241 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.842387 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.843899 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-serving-cert\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.843946 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.843975 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-audit-dir\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc 
kubenswrapper[4805]: I0226 17:18:31.844003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-serving-cert\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6trqw\" (UniqueName: \"kubernetes.io/projected/07a7b8f7-2103-443e-906f-5d2d74baa5a9-kube-api-access-6trqw\") pod \"control-plane-machine-set-operator-78cbb6b69f-frcbs\" (UID: \"07a7b8f7-2103-443e-906f-5d2d74baa5a9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-config\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844099 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844119 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/99b673f3-8f28-492a-a26e-a5254f7ef796-machine-approver-tls\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844141 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fa7532e-b0f6-481d-a6d3-9523edc96c13-bound-sa-token\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844164 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-ca\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-etcd-client\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844252 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99b673f3-8f28-492a-a26e-a5254f7ef796-auth-proxy-config\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844270 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmb9h\" (UniqueName: \"kubernetes.io/projected/99edc210-b315-4224-8d9f-a5911f8527b2-kube-api-access-dmb9h\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844302 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-client\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-secret-volume\") pod \"collect-profiles-29535435-n8728\" (UID: 
\"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-audit-policies\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwbnd\" (UniqueName: \"kubernetes.io/projected/05e30706-eb6c-42d4-a28d-aa664f89ed80-kube-api-access-qwbnd\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844412 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vns2j\" (UniqueName: \"kubernetes.io/projected/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-kube-api-access-vns2j\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844434 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwkbb\" (UniqueName: \"kubernetes.io/projected/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-kube-api-access-kwkbb\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fa7532e-b0f6-481d-a6d3-9523edc96c13-trusted-ca\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ndr5\" (UniqueName: \"kubernetes.io/projected/b656b811-2756-4cb0-8a5e-2fce336a3047-kube-api-access-5ndr5\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844532 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-config-volume\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844556 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844668 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-config\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-config\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844864 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a08155f-ab85-41d4-afcd-6d681c38f727-srv-cert\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844883 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/99edc210-b315-4224-8d9f-a5911f8527b2-config\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.844902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-image-import-ca\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.845066 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06563-787b-4f43-ab99-d900accc2791-config\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.845094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.845113 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/07a7b8f7-2103-443e-906f-5d2d74baa5a9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-frcbs\" (UID: \"07a7b8f7-2103-443e-906f-5d2d74baa5a9\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.845220 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.845303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sckxv\" (UniqueName: \"kubernetes.io/projected/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-kube-api-access-sckxv\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.845651 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b656b811-2756-4cb0-8a5e-2fce336a3047-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.847866 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-config\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.848887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-image-import-ca\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.849059 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99edc210-b315-4224-8d9f-a5911f8527b2-config\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.849137 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-config\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.846872 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.851362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/99b673f3-8f28-492a-a26e-a5254f7ef796-auth-proxy-config\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 
17:18:31.845367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-config\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.849313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f440d939-b304-4728-afc4-ad814d771fbb-serving-cert\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.854919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-serving-cert\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.855952 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.856068 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-service-ca\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.856105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-config\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.856190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-dir\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.856235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.856260 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-dir\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.860076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.860129 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f440d939-b304-4728-afc4-ad814d771fbb-serving-cert\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.856669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1c73718-f5ac-4887-b013-04660314d6ac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861505 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpd2h\" (UniqueName: \"kubernetes.io/projected/f440d939-b304-4728-afc4-ad814d771fbb-kube-api-access-lpd2h\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: 
\"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861575 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861895 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx6xc\" (UniqueName: \"kubernetes.io/projected/9ff06563-787b-4f43-ab99-d900accc2791-kube-api-access-tx6xc\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.861979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/99edc210-b315-4224-8d9f-a5911f8527b2-images\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862046 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-config\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862090 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5qp\" (UniqueName: \"kubernetes.io/projected/f5c14f64-688c-4661-a440-5b171ee3d7b6-kube-api-access-nq5qp\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862097 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/99b673f3-8f28-492a-a26e-a5254f7ef796-machine-approver-tls\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862136 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-serving-cert\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 
crc kubenswrapper[4805]: I0226 17:18:31.862227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8szxp\" (UniqueName: \"kubernetes.io/projected/99b673f3-8f28-492a-a26e-a5254f7ef796-kube-api-access-8szxp\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b656b811-2756-4cb0-8a5e-2fce336a3047-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd9j7\" (UniqueName: \"kubernetes.io/projected/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-kube-api-access-xd9j7\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862361 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6j4c\" (UniqueName: \"kubernetes.io/projected/f6af6767-3331-4e84-97c7-385cd642443c-kube-api-access-s6j4c\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862627 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d1c73718-f5ac-4887-b013-04660314d6ac-config\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-client-ca\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.862921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tklxz\" (UniqueName: \"kubernetes.io/projected/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-kube-api-access-tklxz\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.863058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.865393 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: 
\"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.871243 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-etcd-client\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.878384 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-serving-cert\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.878460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-encryption-config\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.878485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6af6767-3331-4e84-97c7-385cd642443c-node-pullsecrets\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.878510 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-client-ca\") pod 
\"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.878960 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-images\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879136 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5c14f64-688c-4661-a440-5b171ee3d7b6-serving-cert\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38eb7141-6e03-493c-855f-def45c0e7977-serving-cert\") pod 
\"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-service-ca-bundle\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879188 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/528c9fdc-fb48-460b-a06b-a07ce3c388c4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wj5zq\" (UID: \"528c9fdc-fb48-460b-a06b-a07ce3c388c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k42f\" (UniqueName: \"kubernetes.io/projected/d1c73718-f5ac-4887-b013-04660314d6ac-kube-api-access-7k42f\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-config\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 
17:18:31.879246 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47nxg\" (UniqueName: \"kubernetes.io/projected/4a08155f-ab85-41d4-afcd-6d681c38f727-kube-api-access-47nxg\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879262 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b69797-31c3-4e0e-8968-eb38c731b343-proxy-tls\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879306 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879331 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-policies\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879350 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b656b811-2756-4cb0-8a5e-2fce336a3047-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879370 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a08155f-ab85-41d4-afcd-6d681c38f727-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: 
\"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879418 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b673f3-8f28-492a-a26e-a5254f7ef796-config\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6af6767-3331-4e84-97c7-385cd642443c-audit-dir\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879452 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff06563-787b-4f43-ab99-d900accc2791-serving-cert\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879467 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wttvl\" (UniqueName: \"kubernetes.io/projected/19b69797-31c3-4e0e-8968-eb38c731b343-kube-api-access-wttvl\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879486 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879502 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb4kq\" (UniqueName: \"kubernetes.io/projected/0fa7532e-b0f6-481d-a6d3-9523edc96c13-kube-api-access-wb4kq\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879521 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-client-ca\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879535 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff06563-787b-4f43-ab99-d900accc2791-trusted-ca\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879551 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdjt\" (UniqueName: \"kubernetes.io/projected/046e32fa-0c59-4d66-9ab3-02d3ab26255b-kube-api-access-jsdjt\") pod \"downloads-7954f5f757-lrc7d\" (UID: \"046e32fa-0c59-4d66-9ab3-02d3ab26255b\") " pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:18:31 crc 
kubenswrapper[4805]: I0226 17:18:31.879577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-encryption-config\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1c73718-f5ac-4887-b013-04660314d6ac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879637 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwbdb\" (UniqueName: \"kubernetes.io/projected/38eb7141-6e03-493c-855f-def45c0e7977-kube-api-access-fwbdb\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/99edc210-b315-4224-8d9f-a5911f8527b2-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-etcd-client\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjpx\" (UniqueName: \"kubernetes.io/projected/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-kube-api-access-ncjpx\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879712 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e30706-eb6c-42d4-a28d-aa664f89ed80-serving-cert\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7n8c\" (UniqueName: \"kubernetes.io/projected/528c9fdc-fb48-460b-a06b-a07ce3c388c4-kube-api-access-j7n8c\") pod \"cluster-samples-operator-665b6dd947-wj5zq\" (UID: \"528c9fdc-fb48-460b-a06b-a07ce3c388c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" Feb 26 17:18:31 crc 
kubenswrapper[4805]: I0226 17:18:31.879746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cff54d20-3883-42f8-8f56-7948a115807d-metrics-tls\") pod \"dns-operator-744455d44c-czlsm\" (UID: \"cff54d20-3883-42f8-8f56-7948a115807d\") " pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd22t\" (UniqueName: \"kubernetes.io/projected/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-kube-api-access-dd22t\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-audit\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879801 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879816 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nm5r\" (UniqueName: \"kubernetes.io/projected/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-kube-api-access-2nm5r\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: 
\"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879843 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7kxt\" (UniqueName: \"kubernetes.io/projected/cff54d20-3883-42f8-8f56-7948a115807d-kube-api-access-j7kxt\") pod \"dns-operator-744455d44c-czlsm\" (UID: \"cff54d20-3883-42f8-8f56-7948a115807d\") " pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879879 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879895 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.879912 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0fa7532e-b0f6-481d-a6d3-9523edc96c13-metrics-tls\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.880090 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f6af6767-3331-4e84-97c7-385cd642443c-node-pullsecrets\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.881467 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-service-ca-bundle\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.881908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-client-ca\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.882607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-config\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.883919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/99edc210-b315-4224-8d9f-a5911f8527b2-images\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.884148 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.884327 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.884357 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.884368 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.885418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38eb7141-6e03-493c-855f-def45c0e7977-serving-cert\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.885600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-policies\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.885897 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-etcd-serving-ca\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.886950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.887138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/99edc210-b315-4224-8d9f-a5911f8527b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.887245 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lrc7d"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.887313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b673f3-8f28-492a-a26e-a5254f7ef796-config\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.887318 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f6af6767-3331-4e84-97c7-385cd642443c-audit-dir\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.887940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.888581 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.889624 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.889769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f6af6767-3331-4e84-97c7-385cd642443c-audit\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.889837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e30706-eb6c-42d4-a28d-aa664f89ed80-serving-cert\") pod \"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.889998 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.890034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.890358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f6af6767-3331-4e84-97c7-385cd642443c-encryption-config\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.890407 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38eb7141-6e03-493c-855f-def45c0e7977-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.890559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: 
\"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.890692 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tshb4"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.891877 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mcvr5"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.892766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.893696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.893896 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535438-d8kz2"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.894850 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.895671 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2dnn9"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.897961 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.898355 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkzq5"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.899757 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-k87cp"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.900528 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.900962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.901969 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-79bz8"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.902918 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.903359 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1c73718-f5ac-4887-b013-04660314d6ac-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.905580 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.905604 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.906769 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-czlsm"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.907086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.907957 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n2k99"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.908665 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.909867 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535438-d8kz2"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.911754 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.911822 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.912818 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.914088 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4b67j"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.915255 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.915756 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.916699 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.918804 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.918885 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.919696 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.920653 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n2k99"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.921847 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d7sjf"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.929319 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.929711 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.931099 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-sqdmc"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.932614 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xbznm"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.932797 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.934448 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sqdmc"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.934571 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xbznm"] Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.934516 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.939445 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.959245 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.978926 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980501 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47nxg\" (UniqueName: \"kubernetes.io/projected/4a08155f-ab85-41d4-afcd-6d681c38f727-kube-api-access-47nxg\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980532 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b69797-31c3-4e0e-8968-eb38c731b343-proxy-tls\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b656b811-2756-4cb0-8a5e-2fce336a3047-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980583 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a08155f-ab85-41d4-afcd-6d681c38f727-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ff06563-787b-4f43-ab99-d900accc2791-serving-cert\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wttvl\" (UniqueName: \"kubernetes.io/projected/19b69797-31c3-4e0e-8968-eb38c731b343-kube-api-access-wttvl\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb4kq\" (UniqueName: \"kubernetes.io/projected/0fa7532e-b0f6-481d-a6d3-9523edc96c13-kube-api-access-wb4kq\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff06563-787b-4f43-ab99-d900accc2791-trusted-ca\") pod \"console-operator-58897d9998-rw6n5\" (UID: 
\"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980708 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdjt\" (UniqueName: \"kubernetes.io/projected/046e32fa-0c59-4d66-9ab3-02d3ab26255b-kube-api-access-jsdjt\") pod \"downloads-7954f5f757-lrc7d\" (UID: \"046e32fa-0c59-4d66-9ab3-02d3ab26255b\") " pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-etcd-client\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980788 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjpx\" (UniqueName: \"kubernetes.io/projected/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-kube-api-access-ncjpx\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980814 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7n8c\" (UniqueName: \"kubernetes.io/projected/528c9fdc-fb48-460b-a06b-a07ce3c388c4-kube-api-access-j7n8c\") pod \"cluster-samples-operator-665b6dd947-wj5zq\" (UID: \"528c9fdc-fb48-460b-a06b-a07ce3c388c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cff54d20-3883-42f8-8f56-7948a115807d-metrics-tls\") pod \"dns-operator-744455d44c-czlsm\" (UID: \"cff54d20-3883-42f8-8f56-7948a115807d\") " pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980861 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd22t\" (UniqueName: \"kubernetes.io/projected/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-kube-api-access-dd22t\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nm5r\" (UniqueName: \"kubernetes.io/projected/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-kube-api-access-2nm5r\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980916 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7kxt\" (UniqueName: \"kubernetes.io/projected/cff54d20-3883-42f8-8f56-7948a115807d-kube-api-access-j7kxt\") pod \"dns-operator-744455d44c-czlsm\" (UID: \"cff54d20-3883-42f8-8f56-7948a115807d\") " pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc 
kubenswrapper[4805]: I0226 17:18:31.980967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fa7532e-b0f6-481d-a6d3-9523edc96c13-metrics-tls\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.980992 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-audit-dir\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-serving-cert\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6trqw\" (UniqueName: \"kubernetes.io/projected/07a7b8f7-2103-443e-906f-5d2d74baa5a9-kube-api-access-6trqw\") pod \"control-plane-machine-set-operator-78cbb6b69f-frcbs\" (UID: \"07a7b8f7-2103-443e-906f-5d2d74baa5a9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fa7532e-b0f6-481d-a6d3-9523edc96c13-bound-sa-token\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: 
\"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-ca\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981164 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-client\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-secret-volume\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981228 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-audit-policies\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981257 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwkbb\" (UniqueName: \"kubernetes.io/projected/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-kube-api-access-kwkbb\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981274 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fa7532e-b0f6-481d-a6d3-9523edc96c13-trusted-ca\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ndr5\" (UniqueName: \"kubernetes.io/projected/b656b811-2756-4cb0-8a5e-2fce336a3047-kube-api-access-5ndr5\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 
17:18:31.981306 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-config-volume\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981324 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981340 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a08155f-ab85-41d4-afcd-6d681c38f727-srv-cert\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06563-787b-4f43-ab99-d900accc2791-config\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: 
\"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981394 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/07a7b8f7-2103-443e-906f-5d2d74baa5a9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-frcbs\" (UID: \"07a7b8f7-2103-443e-906f-5d2d74baa5a9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sckxv\" (UniqueName: \"kubernetes.io/projected/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-kube-api-access-sckxv\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b656b811-2756-4cb0-8a5e-2fce336a3047-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:31 crc 
kubenswrapper[4805]: I0226 17:18:31.981493 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-service-ca\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981515 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-config\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx6xc\" (UniqueName: \"kubernetes.io/projected/9ff06563-787b-4f43-ab99-d900accc2791-kube-api-access-tx6xc\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-config\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq5qp\" (UniqueName: \"kubernetes.io/projected/f5c14f64-688c-4661-a440-5b171ee3d7b6-kube-api-access-nq5qp\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981631 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-serving-cert\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b656b811-2756-4cb0-8a5e-2fce336a3047-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd9j7\" (UniqueName: 
\"kubernetes.io/projected/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-kube-api-access-xd9j7\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981769 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tklxz\" (UniqueName: \"kubernetes.io/projected/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-kube-api-access-tklxz\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-serving-cert\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-encryption-config\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-images\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981907 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5c14f64-688c-4661-a440-5b171ee3d7b6-serving-cert\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.981928 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/528c9fdc-fb48-460b-a06b-a07ce3c388c4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wj5zq\" (UID: \"528c9fdc-fb48-460b-a06b-a07ce3c388c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.983070 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-ca\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.983146 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-audit-dir\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.983333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff06563-787b-4f43-ab99-d900accc2791-config\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.983375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.983489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9ff06563-787b-4f43-ab99-d900accc2791-trusted-ca\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.983914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ff06563-787b-4f43-ab99-d900accc2791-serving-cert\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.984132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-service-ca\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.984502 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.984830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.984968 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.985134 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/528c9fdc-fb48-460b-a06b-a07ce3c388c4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-wj5zq\" (UID: \"528c9fdc-fb48-460b-a06b-a07ce3c388c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.985763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-etcd-client\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.985900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-serving-cert\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.986149 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5c14f64-688c-4661-a440-5b171ee3d7b6-config\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.986574 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b656b811-2756-4cb0-8a5e-2fce336a3047-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.986586 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.986751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-audit-policies\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.987502 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.987568 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cff54d20-3883-42f8-8f56-7948a115807d-metrics-tls\") pod \"dns-operator-744455d44c-czlsm\" (UID: \"cff54d20-3883-42f8-8f56-7948a115807d\") " pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.987840 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5c14f64-688c-4661-a440-5b171ee3d7b6-etcd-client\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" 
Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.988541 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-serving-cert\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.989343 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5c14f64-688c-4661-a440-5b171ee3d7b6-serving-cert\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.990189 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-encryption-config\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.991195 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b656b811-2756-4cb0-8a5e-2fce336a3047-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:31 crc kubenswrapper[4805]: I0226 17:18:31.999354 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.018815 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 17:18:32 
crc kubenswrapper[4805]: I0226 17:18:32.038587 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.060095 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.081030 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.100477 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.119387 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.125999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0fa7532e-b0f6-481d-a6d3-9523edc96c13-metrics-tls\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.138943 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.165086 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.175911 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fa7532e-b0f6-481d-a6d3-9523edc96c13-trusted-ca\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.178976 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.199058 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.218632 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.239410 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.259142 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.278900 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.300792 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.319198 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.340349 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.361064 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.378644 4805 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.399293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.418586 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.445446 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.477790 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.497724 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.518572 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.539352 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.558724 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.578328 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.598135 4805 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.607464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4a08155f-ab85-41d4-afcd-6d681c38f727-profile-collector-cert\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.609297 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-secret-volume\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.618172 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.638137 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.649106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/07a7b8f7-2103-443e-906f-5d2d74baa5a9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-frcbs\" (UID: \"07a7b8f7-2103-443e-906f-5d2d74baa5a9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.658748 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 17:18:32 crc 
kubenswrapper[4805]: I0226 17:18:32.677633 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.697661 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.716570 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.737959 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.745195 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-config-volume\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.757255 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.778268 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.798099 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.817442 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 
17:18:32.836527 4805 request.go:700] Waited for 1.016152716s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.839682 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.850661 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4a08155f-ab85-41d4-afcd-6d681c38f727-srv-cert\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.857927 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.878643 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.897317 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.917738 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.938053 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 
17:18:32.947311 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.957631 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.977652 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.981608 4805 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.981716 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/19b69797-31c3-4e0e-8968-eb38c731b343-proxy-tls podName:19b69797-31c3-4e0e-8968-eb38c731b343 nodeName:}" failed. No retries permitted until 2026-02-26 17:18:33.481689652 +0000 UTC m=+228.043443991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/19b69797-31c3-4e0e-8968-eb38c731b343-proxy-tls") pod "machine-config-operator-74547568cd-2nd5r" (UID: "19b69797-31c3-4e0e-8968-eb38c731b343") : failed to sync secret cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.984813 4805 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.984811 4805 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.984876 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-config podName:67e1c0bf-8550-4126-986d-0d88d4d5f8ef nodeName:}" failed. No retries permitted until 2026-02-26 17:18:33.4848615 +0000 UTC m=+228.046615849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-config") pod "service-ca-operator-777779d784-5jhjn" (UID: "67e1c0bf-8550-4126-986d-0d88d4d5f8ef") : failed to sync configmap cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.985045 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics podName:ca3d06f8-3cc9-4e77-9d45-e1232c00b04f nodeName:}" failed. No retries permitted until 2026-02-26 17:18:33.484988633 +0000 UTC m=+228.046743002 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics") pod "marketplace-operator-79b997595-d7sjf" (UID: "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f") : failed to sync secret cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.984834 4805 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.985172 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca podName:ca3d06f8-3cc9-4e77-9d45-e1232c00b04f nodeName:}" failed. No retries permitted until 2026-02-26 17:18:33.485102896 +0000 UTC m=+228.046857335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca") pod "marketplace-operator-79b997595-d7sjf" (UID: "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f") : failed to sync configmap cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.985368 4805 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.985533 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-serving-cert podName:67e1c0bf-8550-4126-986d-0d88d4d5f8ef nodeName:}" failed. No retries permitted until 2026-02-26 17:18:33.485452335 +0000 UTC m=+228.047206714 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-serving-cert") pod "service-ca-operator-777779d784-5jhjn" (UID: "67e1c0bf-8550-4126-986d-0d88d4d5f8ef") : failed to sync secret cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.986241 4805 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: E0226 17:18:32.986374 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-images podName:19b69797-31c3-4e0e-8968-eb38c731b343 nodeName:}" failed. No retries permitted until 2026-02-26 17:18:33.486349197 +0000 UTC m=+228.048103576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-images") pod "machine-config-operator-74547568cd-2nd5r" (UID: "19b69797-31c3-4e0e-8968-eb38c731b343") : failed to sync configmap cache: timed out waiting for the condition Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.987590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" Feb 26 17:18:32 crc kubenswrapper[4805]: I0226 17:18:32.998485 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.018388 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.038864 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.058236 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.078501 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.098158 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.117631 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.138757 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.158513 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.177801 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.197774 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.225497 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 17:18:33 crc 
kubenswrapper[4805]: I0226 17:18:33.237907 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.259329 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.278285 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.298734 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.339589 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.358978 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.379271 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.379569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.398494 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.437531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwbnd\" (UniqueName: \"kubernetes.io/projected/05e30706-eb6c-42d4-a28d-aa664f89ed80-kube-api-access-qwbnd\") pod 
\"controller-manager-879f6c89f-ngs29\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.458208 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vns2j\" (UniqueName: \"kubernetes.io/projected/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-kube-api-access-vns2j\") pod \"oauth-openshift-558db77b4-mcvr5\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.480781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpd2h\" (UniqueName: \"kubernetes.io/projected/f440d939-b304-4728-afc4-ad814d771fbb-kube-api-access-lpd2h\") pod \"route-controller-manager-6576b87f9c-g44kw\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.490619 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.497996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8szxp\" (UniqueName: \"kubernetes.io/projected/99b673f3-8f28-492a-a26e-a5254f7ef796-kube-api-access-8szxp\") pod \"machine-approver-56656f9798-gc2nd\" (UID: \"99b673f3-8f28-492a-a26e-a5254f7ef796\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.506912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.507103 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-config\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.507226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.507305 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-serving-cert\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.507341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-images\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.507391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b69797-31c3-4e0e-8968-eb38c731b343-proxy-tls\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.508572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/19b69797-31c3-4e0e-8968-eb38c731b343-images\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.508992 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.509601 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-config\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.512143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-serving-cert\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.512333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/19b69797-31c3-4e0e-8968-eb38c731b343-proxy-tls\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.512555 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.517135 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6j4c\" (UniqueName: \"kubernetes.io/projected/f6af6767-3331-4e84-97c7-385cd642443c-kube-api-access-s6j4c\") pod \"apiserver-76f77b778f-qkzq5\" (UID: \"f6af6767-3331-4e84-97c7-385cd642443c\") " pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 
26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.534496 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmb9h\" (UniqueName: \"kubernetes.io/projected/99edc210-b315-4224-8d9f-a5911f8527b2-kube-api-access-dmb9h\") pod \"machine-api-operator-5694c8668f-v45rz\" (UID: \"99edc210-b315-4224-8d9f-a5911f8527b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.545991 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.557739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k42f\" (UniqueName: \"kubernetes.io/projected/d1c73718-f5ac-4887-b013-04660314d6ac-kube-api-access-7k42f\") pod \"openshift-apiserver-operator-796bbdcf4f-x8kkn\" (UID: \"d1c73718-f5ac-4887-b013-04660314d6ac\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.576451 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwbdb\" (UniqueName: \"kubernetes.io/projected/38eb7141-6e03-493c-855f-def45c0e7977-kube-api-access-fwbdb\") pod \"authentication-operator-69f744f599-mpnkz\" (UID: \"38eb7141-6e03-493c-855f-def45c0e7977\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.577609 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.591999 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.598573 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.618066 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.620124 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.639481 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.658805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.679440 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.698718 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.700417 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ngs29"]
Feb 26 17:18:33 crc kubenswrapper[4805]: W0226 17:18:33.710085 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05e30706_eb6c_42d4_a28d_aa664f89ed80.slice/crio-bb7281c2f84b6d9a7a01c2382232fbce8a6109d457a7e9d167f5ec3a1587d747 WatchSource:0}: Error finding container bb7281c2f84b6d9a7a01c2382232fbce8a6109d457a7e9d167f5ec3a1587d747: Status 404 returned error can't find the container with id bb7281c2f84b6d9a7a01c2382232fbce8a6109d457a7e9d167f5ec3a1587d747
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.718338 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.736663 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.738223 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.758075 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.773348 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.778150 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.797682 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.810159 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.810507 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mcvr5"]
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.820625 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 26 17:18:33 crc kubenswrapper[4805]: W0226 17:18:33.825523 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f7f215f_544a_4b8a_814d_5e6ecd814b2d.slice/crio-d8b59acb36316aa0fcd78b27458d7994098f3d5e0d302c24eac3ede7a52f170e WatchSource:0}: Error finding container d8b59acb36316aa0fcd78b27458d7994098f3d5e0d302c24eac3ede7a52f170e: Status 404 returned error can't find the container with id d8b59acb36316aa0fcd78b27458d7994098f3d5e0d302c24eac3ede7a52f170e
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.834408 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.840259 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.858743 4805 request.go:700] Waited for 1.923460771s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.863349 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.869500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qkzq5"]
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.903369 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47nxg\" (UniqueName: \"kubernetes.io/projected/4a08155f-ab85-41d4-afcd-6d681c38f727-kube-api-access-47nxg\") pod \"catalog-operator-68c6474976-9bxpz\" (UID: \"4a08155f-ab85-41d4-afcd-6d681c38f727\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.910466 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.928333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdjt\" (UniqueName: \"kubernetes.io/projected/046e32fa-0c59-4d66-9ab3-02d3ab26255b-kube-api-access-jsdjt\") pod \"downloads-7954f5f757-lrc7d\" (UID: \"046e32fa-0c59-4d66-9ab3-02d3ab26255b\") " pod="openshift-console/downloads-7954f5f757-lrc7d"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.943299 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wttvl\" (UniqueName: \"kubernetes.io/projected/19b69797-31c3-4e0e-8968-eb38c731b343-kube-api-access-wttvl\") pod \"machine-config-operator-74547568cd-2nd5r\" (UID: \"19b69797-31c3-4e0e-8968-eb38c731b343\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.951726 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.971786 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb4kq\" (UniqueName: \"kubernetes.io/projected/0fa7532e-b0f6-481d-a6d3-9523edc96c13-kube-api-access-wb4kq\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6"
Feb 26 17:18:33 crc kubenswrapper[4805]: I0226 17:18:33.979082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjpx\" (UniqueName: \"kubernetes.io/projected/24f9a767-19ab-4c2f-a1d6-3378e7c793d1-kube-api-access-ncjpx\") pod \"kube-storage-version-migrator-operator-b67b599dd-6s2jm\" (UID: \"24f9a767-19ab-4c2f-a1d6-3378e7c793d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.006310 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn"]
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.022794 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq5qp\" (UniqueName: \"kubernetes.io/projected/f5c14f64-688c-4661-a440-5b171ee3d7b6-kube-api-access-nq5qp\") pod \"etcd-operator-b45778765-ml8xl\" (UID: \"f5c14f64-688c-4661-a440-5b171ee3d7b6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.023878 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7n8c\" (UniqueName: \"kubernetes.io/projected/528c9fdc-fb48-460b-a06b-a07ce3c388c4-kube-api-access-j7n8c\") pod \"cluster-samples-operator-665b6dd947-wj5zq\" (UID: \"528c9fdc-fb48-460b-a06b-a07ce3c388c4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.033535 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"]
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.036782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd22t\" (UniqueName: \"kubernetes.io/projected/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-kube-api-access-dd22t\") pod \"collect-profiles-29535435-n8728\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.037615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.055738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nm5r\" (UniqueName: \"kubernetes.io/projected/b68ea4bc-0e19-42c3-80f4-7c71d84e97c3-kube-api-access-2nm5r\") pod \"apiserver-7bbb656c7d-hln6b\" (UID: \"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.075635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sckxv\" (UniqueName: \"kubernetes.io/projected/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-kube-api-access-sckxv\") pod \"marketplace-operator-79b997595-d7sjf\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.089628 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-lrc7d"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.092241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b656b811-2756-4cb0-8a5e-2fce336a3047-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.113202 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-v45rz"]
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.117000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fa7532e-b0f6-481d-a6d3-9523edc96c13-bound-sa-token\") pod \"ingress-operator-5b745b69d9-f72j6\" (UID: \"0fa7532e-b0f6-481d-a6d3-9523edc96c13\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.133495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6trqw\" (UniqueName: \"kubernetes.io/projected/07a7b8f7-2103-443e-906f-5d2d74baa5a9-kube-api-access-6trqw\") pod \"control-plane-machine-set-operator-78cbb6b69f-frcbs\" (UID: \"07a7b8f7-2103-443e-906f-5d2d74baa5a9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.156766 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.158639 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ndr5\" (UniqueName: \"kubernetes.io/projected/b656b811-2756-4cb0-8a5e-2fce336a3047-kube-api-access-5ndr5\") pod \"cluster-image-registry-operator-dc59b4c8b-k9xsh\" (UID: \"b656b811-2756-4cb0-8a5e-2fce336a3047\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.170308 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.174598 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7kxt\" (UniqueName: \"kubernetes.io/projected/cff54d20-3883-42f8-8f56-7948a115807d-kube-api-access-j7kxt\") pod \"dns-operator-744455d44c-czlsm\" (UID: \"cff54d20-3883-42f8-8f56-7948a115807d\") " pod="openshift-dns-operator/dns-operator-744455d44c-czlsm"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.198923 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mpnkz"]
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.199357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx6xc\" (UniqueName: \"kubernetes.io/projected/9ff06563-787b-4f43-ab99-d900accc2791-kube-api-access-tx6xc\") pod \"console-operator-58897d9998-rw6n5\" (UID: \"9ff06563-787b-4f43-ab99-d900accc2791\") " pod="openshift-console-operator/console-operator-58897d9998-rw6n5"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.219876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd9j7\" (UniqueName: \"kubernetes.io/projected/b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30-kube-api-access-xd9j7\") pod \"openshift-controller-manager-operator-756b6f6bc6-25fk7\" (UID: \"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.222713 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.230979 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.237629 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.242324 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tklxz\" (UniqueName: \"kubernetes.io/projected/8c2d3d15-9f73-4fed-81e3-2f8ce400b967-kube-api-access-tklxz\") pod \"openshift-config-operator-7777fb866f-6l2g9\" (UID: \"8c2d3d15-9f73-4fed-81e3-2f8ce400b967\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.255463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwkbb\" (UniqueName: \"kubernetes.io/projected/67e1c0bf-8550-4126-986d-0d88d4d5f8ef-kube-api-access-kwkbb\") pod \"service-ca-operator-777779d784-5jhjn\" (UID: \"67e1c0bf-8550-4126-986d-0d88d4d5f8ef\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.261458 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b"
Feb 26 17:18:34 crc kubenswrapper[4805]: W0226 17:18:34.278712 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38eb7141_6e03_493c_855f_def45c0e7977.slice/crio-43f2692931f505be98408303652f6f1e503319de052a6b384bcb20aba35f880b WatchSource:0}: Error finding container 43f2692931f505be98408303652f6f1e503319de052a6b384bcb20aba35f880b: Status 404 returned error can't find the container with id 43f2692931f505be98408303652f6f1e503319de052a6b384bcb20aba35f880b
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.315399 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.325656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq"]
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.326881 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/293ce1a3-001e-4935-be39-8b40f869a2ad-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.326927 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.326969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24403043-1500-4b05-a4a3-6863604f54ad-tmpfs\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.326986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57e2f33-3890-47ba-a25d-53d23342a7e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327002 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-config\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-oauth-config\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-signing-key\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327091 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml99\" (UniqueName: \"kubernetes.io/projected/293ce1a3-001e-4935-be39-8b40f869a2ad-kube-api-access-sml99\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327148 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd0845ab-7def-4118-8108-f1254a8f79b4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxn2s\" (UniqueName: \"kubernetes.io/projected/24403043-1500-4b05-a4a3-6863604f54ad-kube-api-access-hxn2s\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-trusted-ca\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm2sp\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-kube-api-access-sm2sp\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327314 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-trusted-ca-bundle\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327405 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24403043-1500-4b05-a4a3-6863604f54ad-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327424 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-default-certificate\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-serving-cert\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327487 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s26sc\" (UniqueName: \"kubernetes.io/projected/0bc402c9-dc6d-402f-9122-c7054331f144-kube-api-access-s26sc\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327511 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/293ce1a3-001e-4935-be39-8b40f869a2ad-proxy-tls\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327589 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2d6l\" (UniqueName: \"kubernetes.io/projected/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-kube-api-access-p2d6l\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327641 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327694 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-signing-cabundle\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327724 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-bound-sa-token\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327747 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-metrics-certs\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57e2f33-3890-47ba-a25d-53d23342a7e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327815 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f57e2f33-3890-47ba-a25d-53d23342a7e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24403043-1500-4b05-a4a3-6863604f54ad-webhook-cert\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-service-ca\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327907 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0845ab-7def-4118-8108-f1254a8f79b4-config\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.327988 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-registry-tls\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.328979 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-registry-certificates\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329004 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631c898-5180-424c-8cae-922d1a709938-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329099 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4n8\" (UniqueName: \"kubernetes.io/projected/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-kube-api-access-wz4n8\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8"
Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.329174 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:34.829157499 +0000 UTC m=+229.390911838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329201 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-stats-auth\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bc402c9-dc6d-402f-9122-c7054331f144-service-ca-bundle\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-oauth-serving-cert\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631c898-5180-424c-8cae-922d1a709938-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794vg\" (UniqueName: \"kubernetes.io/projected/99dff83c-eeb0-4382-89e6-9956167ea61c-kube-api-access-794vg\") pod \"migrator-59844c95c7-6lgpv\" (UID: \"99dff83c-eeb0-4382-89e6-9956167ea61c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-config\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.329690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd0845ab-7def-4118-8108-f1254a8f79b4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.353257 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-czlsm"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.365315 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rw6n5"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.372267 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" event={"ID":"05e30706-eb6c-42d4-a28d-aa664f89ed80","Type":"ContainerStarted","Data":"3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12"}
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.372341 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" event={"ID":"05e30706-eb6c-42d4-a28d-aa664f89ed80","Type":"ContainerStarted","Data":"bb7281c2f84b6d9a7a01c2382232fbce8a6109d457a7e9d167f5ec3a1587d747"}
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.373365 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.378289 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ngs29 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.378356 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.379226 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.392561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" event={"ID":"f440d939-b304-4728-afc4-ad814d771fbb","Type":"ContainerStarted","Data":"f97cbcae5335950a4723f8b9921e93493aa8befc48b9b1d1e39a6269302f2726"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.405579 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.406760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" event={"ID":"2f7f215f-544a-4b8a-814d-5e6ecd814b2d","Type":"ContainerStarted","Data":"f352fc047e35d3cbddb4bbdebabaf0b17e8ec8f40f668a5f1752a35e0449b8ef"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.406797 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" event={"ID":"2f7f215f-544a-4b8a-814d-5e6ecd814b2d","Type":"ContainerStarted","Data":"d8b59acb36316aa0fcd78b27458d7994098f3d5e0d302c24eac3ede7a52f170e"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.407322 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.408543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" event={"ID":"38eb7141-6e03-493c-855f-def45c0e7977","Type":"ContainerStarted","Data":"43f2692931f505be98408303652f6f1e503319de052a6b384bcb20aba35f880b"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.415469 4805 patch_prober.go:28] 
interesting pod/oauth-openshift-558db77b4-mcvr5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body= Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.415518 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.419134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" event={"ID":"d1c73718-f5ac-4887-b013-04660314d6ac","Type":"ContainerStarted","Data":"9a0ab088947537cb333f08b2ad4681b768d239884cbc0bb7dce89733f8a926ff"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.419197 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" event={"ID":"d1c73718-f5ac-4887-b013-04660314d6ac","Type":"ContainerStarted","Data":"d1b3db75ded72ea251afec39ac71bf5faf5c0f6ace706dd13fee0a3d6f38e4e3"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.419230 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.426858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" event={"ID":"f6af6767-3331-4e84-97c7-385cd642443c","Type":"ContainerStarted","Data":"038e3a5b487dbf4194e88729c46b7a0e8a828a2a812f2d7cef5c519c097bd53d"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.426902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" event={"ID":"f6af6767-3331-4e84-97c7-385cd642443c","Type":"ContainerStarted","Data":"cd8fc84dc962a1c670ab229f0dc37f15c8f203b06f94bf99e78b3883b0ebdaea"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.439772 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnkd8\" (UniqueName: \"kubernetes.io/projected/392795fe-61ee-4a31-9009-e0be88c6dd2d-kube-api-access-rnkd8\") pod \"package-server-manager-789f6589d5-pzjmg\" (UID: \"392795fe-61ee-4a31-9009-e0be88c6dd2d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s26sc\" (UniqueName: \"kubernetes.io/projected/0bc402c9-dc6d-402f-9122-c7054331f144-kube-api-access-s26sc\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " 
pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440199 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a986fa5-f0cd-4512-89fe-8e8ccad45745-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b67j\" (UID: \"4a986fa5-f0cd-4512-89fe-8e8ccad45745\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/293ce1a3-001e-4935-be39-8b40f869a2ad-proxy-tls\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440312 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2d6l\" (UniqueName: \"kubernetes.io/projected/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-kube-api-access-p2d6l\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440349 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440434 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-config-volume\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlq29\" (UniqueName: \"kubernetes.io/projected/4a986fa5-f0cd-4512-89fe-8e8ccad45745-kube-api-access-hlq29\") pod \"multus-admission-controller-857f4d67dd-4b67j\" (UID: \"4a986fa5-f0cd-4512-89fe-8e8ccad45745\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-signing-cabundle\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440668 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-bound-sa-token\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c4b829e-a7ab-4702-9310-4010906609a4-srv-cert\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440774 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-socket-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-metrics-certs\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b107f5-1e0b-420c-be32-1ae140cba05d-cert\") pod \"ingress-canary-sqdmc\" (UID: \"99b107f5-1e0b-420c-be32-1ae140cba05d\") " pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.440997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c4b829e-a7ab-4702-9310-4010906609a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441165 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf02d64f-cad0-4878-b177-bf89cdfd3587-certs\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 
17:18:34.441294 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/392795fe-61ee-4a31-9009-e0be88c6dd2d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pzjmg\" (UID: \"392795fe-61ee-4a31-9009-e0be88c6dd2d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57e2f33-3890-47ba-a25d-53d23342a7e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f57e2f33-3890-47ba-a25d-53d23342a7e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441500 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24403043-1500-4b05-a4a3-6863604f54ad-webhook-cert\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-service-ca\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441808 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44q2\" (UniqueName: \"kubernetes.io/projected/99b107f5-1e0b-420c-be32-1ae140cba05d-kube-api-access-x44q2\") pod \"ingress-canary-sqdmc\" (UID: \"99b107f5-1e0b-420c-be32-1ae140cba05d\") " pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0845ab-7def-4118-8108-f1254a8f79b4-config\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441969 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-registry-tls\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.441999 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-registration-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442232 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-registry-certificates\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631c898-5180-424c-8cae-922d1a709938-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442340 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz4n8\" (UniqueName: \"kubernetes.io/projected/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-kube-api-access-wz4n8\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442394 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" event={"ID":"99b673f3-8f28-492a-a26e-a5254f7ef796","Type":"ContainerStarted","Data":"8e29fc91642c505b9687a46e3c0082ac726e37b236e203df6dde2a1c2d92c6a9"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-stats-auth\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442517 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95bn\" (UniqueName: \"kubernetes.io/projected/9c4b829e-a7ab-4702-9310-4010906609a4-kube-api-access-d95bn\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.442744 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:34.942712643 +0000 UTC m=+229.504467162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bc402c9-dc6d-402f-9122-c7054331f144-service-ca-bundle\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443147 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-oauth-serving-cert\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " 
pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443263 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631c898-5180-424c-8cae-922d1a709938-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443316 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-794vg\" (UniqueName: \"kubernetes.io/projected/99dff83c-eeb0-4382-89e6-9956167ea61c-kube-api-access-794vg\") pod \"migrator-59844c95c7-6lgpv\" (UID: \"99dff83c-eeb0-4382-89e6-9956167ea61c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443377 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-config\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443410 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf02d64f-cad0-4878-b177-bf89cdfd3587-node-bootstrap-token\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cd0845ab-7def-4118-8108-f1254a8f79b4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-metrics-tls\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443577 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cztj2\" (UniqueName: \"kubernetes.io/projected/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-kube-api-access-cztj2\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/293ce1a3-001e-4935-be39-8b40f869a2ad-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443749 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsr5\" (UniqueName: \"kubernetes.io/projected/f86d7b9c-1d60-4153-98aa-6a8985a78907-kube-api-access-sxsr5\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443792 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-mountpoint-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.443868 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-plugins-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445669 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24403043-1500-4b05-a4a3-6863604f54ad-tmpfs\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445711 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djv8\" (UniqueName: \"kubernetes.io/projected/ce8a1740-3334-42a4-af1d-1a7de4758c9c-kube-api-access-2djv8\") pod \"auto-csr-approver-29535438-d8kz2\" (UID: \"ce8a1740-3334-42a4-af1d-1a7de4758c9c\") " 
pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57e2f33-3890-47ba-a25d-53d23342a7e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445822 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7g7\" (UniqueName: \"kubernetes.io/projected/bf02d64f-cad0-4878-b177-bf89cdfd3587-kube-api-access-4q7g7\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445898 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-oauth-config\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-config\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.445979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-signing-key\") pod 
\"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd0845ab-7def-4118-8108-f1254a8f79b4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml99\" (UniqueName: \"kubernetes.io/projected/293ce1a3-001e-4935-be39-8b40f869a2ad-kube-api-access-sml99\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446177 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxn2s\" (UniqueName: \"kubernetes.io/projected/24403043-1500-4b05-a4a3-6863604f54ad-kube-api-access-hxn2s\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446204 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-trusted-ca\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-sm2sp\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-kube-api-access-sm2sp\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-trusted-ca-bundle\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446319 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-csi-data-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446409 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24403043-1500-4b05-a4a3-6863604f54ad-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-default-certificate\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.446490 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-serving-cert\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.479514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/293ce1a3-001e-4935-be39-8b40f869a2ad-proxy-tls\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.442490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" event={"ID":"99b673f3-8f28-492a-a26e-a5254f7ef796","Type":"ContainerStarted","Data":"be1299cc52ecd5045ed5e270de38a467f3355f59f728413026d16f2ab47ad938"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.481447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-stats-auth\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.482294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" event={"ID":"99edc210-b315-4224-8d9f-a5911f8527b2","Type":"ContainerStarted","Data":"295adefbf5d644607efcd5e9d510d5dcd37bb24f8e0d1a3b9c3a8c0552d71f62"} Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.488394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-signing-cabundle\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.489064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd0845ab-7def-4118-8108-f1254a8f79b4-config\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.490321 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-registry-certificates\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.493805 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-oauth-serving-cert\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.493836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bc402c9-dc6d-402f-9122-c7054331f144-service-ca-bundle\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.494653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-config\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.495390 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631c898-5180-424c-8cae-922d1a709938-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.495809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-trusted-ca\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.497952 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24403043-1500-4b05-a4a3-6863604f54ad-tmpfs\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.499479 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-trusted-ca-bundle\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.502185 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/293ce1a3-001e-4935-be39-8b40f869a2ad-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.503994 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57e2f33-3890-47ba-a25d-53d23342a7e7-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.505560 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-service-ca\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.505660 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-config\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.509116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-metrics-certs\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.509923 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631c898-5180-424c-8cae-922d1a709938-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.511939 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz"] Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.518802 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-lrc7d"] Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.521748 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-registry-tls\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.526513 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r"] Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.530318 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.530887 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0bc402c9-dc6d-402f-9122-c7054331f144-default-certificate\") pod 
\"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.531890 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/24403043-1500-4b05-a4a3-6863604f54ad-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.532240 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-signing-key\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.532347 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd0845ab-7def-4118-8108-f1254a8f79b4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.532561 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz4n8\" (UniqueName: \"kubernetes.io/projected/94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa-kube-api-access-wz4n8\") pod \"service-ca-9c57cc56f-79bz8\" (UID: \"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa\") " pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.533527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-oauth-config\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.534165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2d6l\" (UniqueName: \"kubernetes.io/projected/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-kube-api-access-p2d6l\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.535172 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s26sc\" (UniqueName: \"kubernetes.io/projected/0bc402c9-dc6d-402f-9122-c7054331f144-kube-api-access-s26sc\") pod \"router-default-5444994796-bnlgh\" (UID: \"0bc402c9-dc6d-402f-9122-c7054331f144\") " pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.536590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/24403043-1500-4b05-a4a3-6863604f54ad-webhook-cert\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.538700 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f57e2f33-3890-47ba-a25d-53d23342a7e7-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.541828 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.543381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-serving-cert\") pod \"console-f9d7485db-2dnn9\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.543516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548294 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x44q2\" (UniqueName: \"kubernetes.io/projected/99b107f5-1e0b-420c-be32-1ae140cba05d-kube-api-access-x44q2\") pod \"ingress-canary-sqdmc\" (UID: \"99b107f5-1e0b-420c-be32-1ae140cba05d\") " pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548398 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-registration-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548445 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d95bn\" (UniqueName: \"kubernetes.io/projected/9c4b829e-a7ab-4702-9310-4010906609a4-kube-api-access-d95bn\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548522 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf02d64f-cad0-4878-b177-bf89cdfd3587-node-bootstrap-token\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-metrics-tls\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cztj2\" (UniqueName: \"kubernetes.io/projected/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-kube-api-access-cztj2\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsr5\" (UniqueName: \"kubernetes.io/projected/f86d7b9c-1d60-4153-98aa-6a8985a78907-kube-api-access-sxsr5\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-mountpoint-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548710 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-plugins-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djv8\" (UniqueName: \"kubernetes.io/projected/ce8a1740-3334-42a4-af1d-1a7de4758c9c-kube-api-access-2djv8\") pod \"auto-csr-approver-29535438-d8kz2\" (UID: \"ce8a1740-3334-42a4-af1d-1a7de4758c9c\") " pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q7g7\" (UniqueName: \"kubernetes.io/projected/bf02d64f-cad0-4878-b177-bf89cdfd3587-kube-api-access-4q7g7\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.548935 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-csi-data-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549053 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnkd8\" (UniqueName: \"kubernetes.io/projected/392795fe-61ee-4a31-9009-e0be88c6dd2d-kube-api-access-rnkd8\") pod \"package-server-manager-789f6589d5-pzjmg\" (UID: \"392795fe-61ee-4a31-9009-e0be88c6dd2d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:34 crc 
kubenswrapper[4805]: I0226 17:18:34.549114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a986fa5-f0cd-4512-89fe-8e8ccad45745-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b67j\" (UID: \"4a986fa5-f0cd-4512-89fe-8e8ccad45745\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-config-volume\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549283 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlq29\" (UniqueName: \"kubernetes.io/projected/4a986fa5-f0cd-4512-89fe-8e8ccad45745-kube-api-access-hlq29\") pod \"multus-admission-controller-857f4d67dd-4b67j\" (UID: \"4a986fa5-f0cd-4512-89fe-8e8ccad45745\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549344 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c4b829e-a7ab-4702-9310-4010906609a4-srv-cert\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549388 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-socket-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549414 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b107f5-1e0b-420c-be32-1ae140cba05d-cert\") pod \"ingress-canary-sqdmc\" (UID: \"99b107f5-1e0b-420c-be32-1ae140cba05d\") " pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549440 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c4b829e-a7ab-4702-9310-4010906609a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549482 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf02d64f-cad0-4878-b177-bf89cdfd3587-certs\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549502 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-registration-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.549520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/392795fe-61ee-4a31-9009-e0be88c6dd2d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pzjmg\" (UID: \"392795fe-61ee-4a31-9009-e0be88c6dd2d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.550330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-mountpoint-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.550377 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-plugins-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.550885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-socket-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.551076 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 17:18:35.05105295 +0000 UTC m=+229.612807479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.553437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-config-volume\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.553473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f86d7b9c-1d60-4153-98aa-6a8985a78907-csi-data-dir\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.555979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99b107f5-1e0b-420c-be32-1ae140cba05d-cert\") pod \"ingress-canary-sqdmc\" (UID: \"99b107f5-1e0b-420c-be32-1ae140cba05d\") " pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.557124 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf02d64f-cad0-4878-b177-bf89cdfd3587-certs\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " 
pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.558769 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf02d64f-cad0-4878-b177-bf89cdfd3587-node-bootstrap-token\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.560475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9c4b829e-a7ab-4702-9310-4010906609a4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.563525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-metrics-tls\") pod \"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.566132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9c4b829e-a7ab-4702-9310-4010906609a4-srv-cert\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.567873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/392795fe-61ee-4a31-9009-e0be88c6dd2d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pzjmg\" (UID: 
\"392795fe-61ee-4a31-9009-e0be88c6dd2d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.572326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-sz82x\" (UID: \"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.581624 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a986fa5-f0cd-4512-89fe-8e8ccad45745-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-4b67j\" (UID: \"4a986fa5-f0cd-4512-89fe-8e8ccad45745\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.584327 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f57e2f33-3890-47ba-a25d-53d23342a7e7-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4ws68\" (UID: \"f57e2f33-3890-47ba-a25d-53d23342a7e7\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.595217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-bound-sa-token\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.626786 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sm2sp\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-kube-api-access-sm2sp\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.650537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.651138 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.151116272 +0000 UTC m=+229.712870611 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.655702 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml99\" (UniqueName: \"kubernetes.io/projected/293ce1a3-001e-4935-be39-8b40f869a2ad-kube-api-access-sml99\") pod \"machine-config-controller-84d6567774-trm4v\" (UID: \"293ce1a3-001e-4935-be39-8b40f869a2ad\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: W0226 17:18:34.659215 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a08155f_ab85_41d4_afcd_6d681c38f727.slice/crio-16c42249d2a6d923fe9d135e037492d8763d53e5efa653125ebb828cfbb4ed24 WatchSource:0}: Error finding container 16c42249d2a6d923fe9d135e037492d8763d53e5efa653125ebb828cfbb4ed24: Status 404 returned error can't find the container with id 16c42249d2a6d923fe9d135e037492d8763d53e5efa653125ebb828cfbb4ed24 Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.667737 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd0845ab-7def-4118-8108-f1254a8f79b4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9ff4c\" (UID: \"cd0845ab-7def-4118-8108-f1254a8f79b4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.689105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hxn2s\" (UniqueName: \"kubernetes.io/projected/24403043-1500-4b05-a4a3-6863604f54ad-kube-api-access-hxn2s\") pod \"packageserver-d55dfcdfc-n6s2g\" (UID: \"24403043-1500-4b05-a4a3-6863604f54ad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.699541 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.703216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-794vg\" (UniqueName: \"kubernetes.io/projected/99dff83c-eeb0-4382-89e6-9956167ea61c-kube-api-access-794vg\") pod \"migrator-59844c95c7-6lgpv\" (UID: \"99dff83c-eeb0-4382-89e6-9956167ea61c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.726091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.733735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x44q2\" (UniqueName: \"kubernetes.io/projected/99b107f5-1e0b-420c-be32-1ae140cba05d-kube-api-access-x44q2\") pod \"ingress-canary-sqdmc\" (UID: \"99b107f5-1e0b-420c-be32-1ae140cba05d\") " pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.743892 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.749650 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.752518 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.753409 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.253393029 +0000 UTC m=+229.815147378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.762885 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.770002 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"] Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.771605 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djv8\" (UniqueName: \"kubernetes.io/projected/ce8a1740-3334-42a4-af1d-1a7de4758c9c-kube-api-access-2djv8\") pod \"auto-csr-approver-29535438-d8kz2\" (UID: \"ce8a1740-3334-42a4-af1d-1a7de4758c9c\") " pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.776896 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.777093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d95bn\" (UniqueName: \"kubernetes.io/projected/9c4b829e-a7ab-4702-9310-4010906609a4-kube-api-access-d95bn\") pod \"olm-operator-6b444d44fb-2f78d\" (UID: \"9c4b829e-a7ab-4702-9310-4010906609a4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.798326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsr5\" (UniqueName: \"kubernetes.io/projected/f86d7b9c-1d60-4153-98aa-6a8985a78907-kube-api-access-sxsr5\") pod \"csi-hostpathplugin-xbznm\" (UID: \"f86d7b9c-1d60-4153-98aa-6a8985a78907\") " pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.814399 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cztj2\" (UniqueName: \"kubernetes.io/projected/6b4645cb-cf08-4b8d-9115-c2d7d4a40cae-kube-api-access-cztj2\") pod 
\"dns-default-n2k99\" (UID: \"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae\") " pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.815545 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.832290 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.839880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q7g7\" (UniqueName: \"kubernetes.io/projected/bf02d64f-cad0-4878-b177-bf89cdfd3587-kube-api-access-4q7g7\") pod \"machine-config-server-k87cp\" (UID: \"bf02d64f-cad0-4878-b177-bf89cdfd3587\") " pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.854339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.854603 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.354569599 +0000 UTC m=+229.916323938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.854919 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.855269 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.355253316 +0000 UTC m=+229.917007655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.865667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlq29\" (UniqueName: \"kubernetes.io/projected/4a986fa5-f0cd-4512-89fe-8e8ccad45745-kube-api-access-hlq29\") pod \"multus-admission-controller-857f4d67dd-4b67j\" (UID: \"4a986fa5-f0cd-4512-89fe-8e8ccad45745\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.867970 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.875952 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.878746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnkd8\" (UniqueName: \"kubernetes.io/projected/392795fe-61ee-4a31-9009-e0be88c6dd2d-kube-api-access-rnkd8\") pod \"package-server-manager-789f6589d5-pzjmg\" (UID: \"392795fe-61ee-4a31-9009-e0be88c6dd2d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.913823 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.924319 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-k87cp" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.935393 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.943053 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-sqdmc" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.953769 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" Feb 26 17:18:34 crc kubenswrapper[4805]: I0226 17:18:34.956631 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:34 crc kubenswrapper[4805]: E0226 17:18:34.957095 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.456997379 +0000 UTC m=+230.018751718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.011384 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs"] Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.011436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm"] Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.059216 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.059590 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.559577474 +0000 UTC m=+230.121331813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.159906 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.160825 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.161232 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.661218365 +0000 UTC m=+230.222972704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.261770 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.261772 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6"] Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.263510 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.762187359 +0000 UTC m=+230.323941738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.364172 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.364552 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.864535278 +0000 UTC m=+230.426289617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.465254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.465747 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:35.965732348 +0000 UTC m=+230.527486687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.489197 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k87cp" event={"ID":"bf02d64f-cad0-4878-b177-bf89cdfd3587","Type":"ContainerStarted","Data":"20e16c539c115a2ff11cb03ce60b29d3bf9ffd372b98d3ad2365127cc194e2f9"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.492578 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lrc7d" event={"ID":"046e32fa-0c59-4d66-9ab3-02d3ab26255b","Type":"ContainerStarted","Data":"81d984c232621a71b9c796a35776e6544c64f33ea9c87e73d976b36b5e998db6"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.492684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lrc7d" event={"ID":"046e32fa-0c59-4d66-9ab3-02d3ab26255b","Type":"ContainerStarted","Data":"8ebc20d5d30136ff86cf434d591a6831956e04b36977c38f0d5e934dcffe68ef"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.505258 4805 generic.go:334] "Generic (PLEG): container finished" podID="f6af6767-3331-4e84-97c7-385cd642443c" containerID="038e3a5b487dbf4194e88729c46b7a0e8a828a2a812f2d7cef5c519c097bd53d" exitCode=0 Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.505378 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" 
event={"ID":"f6af6767-3331-4e84-97c7-385cd642443c","Type":"ContainerDied","Data":"038e3a5b487dbf4194e88729c46b7a0e8a828a2a812f2d7cef5c519c097bd53d"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.505421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" event={"ID":"f6af6767-3331-4e84-97c7-385cd642443c","Type":"ContainerStarted","Data":"2cbc6450264917c9e9dcdd98c6f7a0cce02aab90fe6c262077076598fbf16602"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.514709 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bnlgh" event={"ID":"0bc402c9-dc6d-402f-9122-c7054331f144","Type":"ContainerStarted","Data":"03562e638e6de72b35fef14fb106534495d0bef39605892f12fc59284922d58a"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.527417 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" event={"ID":"99edc210-b315-4224-8d9f-a5911f8527b2","Type":"ContainerStarted","Data":"ba48df6998cd1661eb18b55ae49d9435afa764df94da16fc96bb6648655ec5f6"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.527858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" event={"ID":"99edc210-b315-4224-8d9f-a5911f8527b2","Type":"ContainerStarted","Data":"32840f018338ad0f2cf22e59f4023d7f2154bad8fb20c71c497a8ccaff69f619"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.530354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" event={"ID":"19b69797-31c3-4e0e-8968-eb38c731b343","Type":"ContainerStarted","Data":"7a81865efd6d43e8c4c68bccf9d6a3aca63f904952af0f46cf43a57425357e16"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.532008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" event={"ID":"19b69797-31c3-4e0e-8968-eb38c731b343","Type":"ContainerStarted","Data":"9ba9d8d3552cebd332d2f39d816ad45ba0af0ad7b04d85bb23f5f5a5f893af04"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.536068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" event={"ID":"4a08155f-ab85-41d4-afcd-6d681c38f727","Type":"ContainerStarted","Data":"16c42249d2a6d923fe9d135e037492d8763d53e5efa653125ebb828cfbb4ed24"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.536408 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.537819 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" event={"ID":"f440d939-b304-4728-afc4-ad814d771fbb","Type":"ContainerStarted","Data":"2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.538276 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.540534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" event={"ID":"528c9fdc-fb48-460b-a06b-a07ce3c388c4","Type":"ContainerStarted","Data":"0322b55f319436ef6ee259ad97d081813aacd02055825a5e63eb511deeefa538"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.548808 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" 
event={"ID":"99b673f3-8f28-492a-a26e-a5254f7ef796","Type":"ContainerStarted","Data":"2abbb0085c7e493cfa88236ba091f248cec766e173b5863a59da0adbcbf9365b"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.555401 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" event={"ID":"07a7b8f7-2103-443e-906f-5d2d74baa5a9","Type":"ContainerStarted","Data":"6c413dfaefbe1f13bc7df7a46a9b5fd8fa2b3cd7635603483f0ada33faaf90b9"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.559735 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" event={"ID":"38eb7141-6e03-493c-855f-def45c0e7977","Type":"ContainerStarted","Data":"60ac54e199db75360a9fdd2ec099c443f2fdd5fcd210b42f06cd0d7a9a8b512e"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.565454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" event={"ID":"24f9a767-19ab-4c2f-a1d6-3378e7c793d1","Type":"ContainerStarted","Data":"72985e820a25bf0a698a752c54932a9da3154a7b07f6e169ef857ca947a3d18d"} Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.566176 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.566349 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 17:18:36.066323363 +0000 UTC m=+230.628077712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.566797 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.566861 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" event={"ID":"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8","Type":"ContainerStarted","Data":"526d13e50e59859fbee6af1c19e37c6c653a9b32b52dab48154a6a2978b3abc2"}
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.567478 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.067456171 +0000 UTC m=+230.629210510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.574148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" event={"ID":"0fa7532e-b0f6-481d-a6d3-9523edc96c13","Type":"ContainerStarted","Data":"525489ac1d98f61e04181799a3b5df49d9b21f7a778a9d771eff8ebf39ff1799"}
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.577535 4805 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-ngs29 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.577589 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.670410 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.670676 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.17064629 +0000 UTC m=+230.732400629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.670997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.674441 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.174423393 +0000 UTC m=+230.736177942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.701779 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9bxpz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.701853 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" podUID="4a08155f-ab85-41d4-afcd-6d681c38f727" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.774556 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.775034 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.275003278 +0000 UTC m=+230.836757617 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.876600 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.877175 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.377154982 +0000 UTC m=+230.938909321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.978456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.978644 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.478611959 +0000 UTC m=+231.040366298 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:35 crc kubenswrapper[4805]: I0226 17:18:35.978877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:35 crc kubenswrapper[4805]: E0226 17:18:35.979417 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.479396268 +0000 UTC m=+231.041150607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.081571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.081817 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.581768297 +0000 UTC m=+231.143522646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.082042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.082426 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.582410743 +0000 UTC m=+231.144165082 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.183492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.184061 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.684039444 +0000 UTC m=+231.245793783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.188315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.215665 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rw6n5"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.238317 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ml8xl"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.239503 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.252657 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.257460 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-czlsm"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.285011 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.285540 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.785524492 +0000 UTC m=+231.347278831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.300948 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d7sjf"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.307470 4805 ???:1] "http: TLS handshake error from 192.168.126.11:59162: no serving certificate available for the kubelet"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.308560 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.331375 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b"]
Feb 26 17:18:36 crc kubenswrapper[4805]: W0226 17:18:36.346623 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2d3d15_9f73_4fed_81e3_2f8ce400b967.slice/crio-b4e232ee39cf5e97eb836d4cf49c0e06bede1d3d78a029f1d1ce48a0169024c3 WatchSource:0}: Error finding container b4e232ee39cf5e97eb836d4cf49c0e06bede1d3d78a029f1d1ce48a0169024c3: Status 404 returned error can't find the container with id b4e232ee39cf5e97eb836d4cf49c0e06bede1d3d78a029f1d1ce48a0169024c3
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.395557 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.396143 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:36.896116623 +0000 UTC m=+231.457870962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.416687 4805 ???:1] "http: TLS handshake error from 192.168.126.11:59174: no serving certificate available for the kubelet"
Feb 26 17:18:36 crc kubenswrapper[4805]: W0226 17:18:36.474691 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb68ea4bc_0e19_42c3_80f4_7c71d84e97c3.slice/crio-44409e5f3c239d4654c03c5b56f3d13a36d63b13eb2662035b7e9b9826c41c2d WatchSource:0}: Error finding container 44409e5f3c239d4654c03c5b56f3d13a36d63b13eb2662035b7e9b9826c41c2d: Status 404 returned error can't find the container with id 44409e5f3c239d4654c03c5b56f3d13a36d63b13eb2662035b7e9b9826c41c2d
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.479653 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.507369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.507878 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.007862523 +0000 UTC m=+231.569616862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.511404 4805 ???:1] "http: TLS handshake error from 192.168.126.11:59188: no serving certificate available for the kubelet"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.531371 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-v45rz" podStartSLOduration=154.531345741 podStartE2EDuration="2m34.531345741s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.519833257 +0000 UTC m=+231.081587606" watchObservedRunningTime="2026-02-26 17:18:36.531345741 +0000 UTC m=+231.093100080"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.541730 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh"]
Feb 26 17:18:36 crc kubenswrapper[4805]: W0226 17:18:36.582519 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb656b811_2756_4cb0_8a5e_2fce336a3047.slice/crio-06b61be56f355cdc4c3c951c66da9427832192af4644c5f5a50cd2065b2346db WatchSource:0}: Error finding container 06b61be56f355cdc4c3c951c66da9427832192af4644c5f5a50cd2065b2346db: Status 404 returned error can't find the container with id 06b61be56f355cdc4c3c951c66da9427832192af4644c5f5a50cd2065b2346db
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.612757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.613835 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.11380951 +0000 UTC m=+231.675563849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.619461 4805 ???:1] "http: TLS handshake error from 192.168.126.11:49984: no serving certificate available for the kubelet"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.649921 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" podStartSLOduration=155.643947372 podStartE2EDuration="2m35.643947372s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.635992496 +0000 UTC m=+231.197746835" watchObservedRunningTime="2026-02-26 17:18:36.643947372 +0000 UTC m=+231.205701711"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.652411 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mpnkz" podStartSLOduration=155.652392859 podStartE2EDuration="2m35.652392859s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.548612576 +0000 UTC m=+231.110366905" watchObservedRunningTime="2026-02-26 17:18:36.652392859 +0000 UTC m=+231.214147198"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.668249 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" event={"ID":"07a7b8f7-2103-443e-906f-5d2d74baa5a9","Type":"ContainerStarted","Data":"2815c675a2a2de116b5a23e8e6586abc697c37614dbf11ea0adc895a6e768009"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.673085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" event={"ID":"67e1c0bf-8550-4126-986d-0d88d4d5f8ef","Type":"ContainerStarted","Data":"95b9da7ce2f12fe7df5a148aa3d4353a9d3fbec9ed386df39eabc039adb7962c"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.697152 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" event={"ID":"0fa7532e-b0f6-481d-a6d3-9523edc96c13","Type":"ContainerStarted","Data":"70aaee1cfd266ec158235dd880261c20c0d43376ea0d9d51fd2962366d140253"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.697240 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" event={"ID":"0fa7532e-b0f6-481d-a6d3-9523edc96c13","Type":"ContainerStarted","Data":"88f8d20430b11da4919f6d7f4e49fdd825f2c9804fc7d2a553c244a186109e74"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.700347 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.702477 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535438-d8kz2"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.709992 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" event={"ID":"cff54d20-3883-42f8-8f56-7948a115807d","Type":"ContainerStarted","Data":"6a3ffe64d2f72a64e87148289d8ec9e15467219a1c62b47c8e5e91caec72a7fb"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.717683 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.717991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" event={"ID":"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30","Type":"ContainerStarted","Data":"c1eef10aa84a07f7a7c454f931d7a23e288308a8003f7458ae5599741e763604"}
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.719335 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.219317586 +0000 UTC m=+231.781071925 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.733824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bnlgh" event={"ID":"0bc402c9-dc6d-402f-9122-c7054331f144","Type":"ContainerStarted","Data":"0b48926aa229e67e9f9412fee65b3e367d4e4569ae1bff2e0afaf01788c85e88"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.734581 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" podStartSLOduration=154.734556261 podStartE2EDuration="2m34.734556261s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.732920351 +0000 UTC m=+231.294674690" watchObservedRunningTime="2026-02-26 17:18:36.734556261 +0000 UTC m=+231.296310600"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.743054 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.744172 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" event={"ID":"24f9a767-19ab-4c2f-a1d6-3378e7c793d1","Type":"ContainerStarted","Data":"090df4f6be84ae5875c77ef734294ce6d726eccd0a5ea88f2d087162638d5ad5"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.780109 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" event={"ID":"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8","Type":"ContainerStarted","Data":"fb246173955ffb56c9ea80ae6637ab230691d880eb3d111a5480e5535212888a"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.782901 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-x8kkn" podStartSLOduration=155.78287499 podStartE2EDuration="2m35.78287499s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.77800235 +0000 UTC m=+231.339756679" watchObservedRunningTime="2026-02-26 17:18:36.78287499 +0000 UTC m=+231.344629319"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.783571 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.789339 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" event={"ID":"8c2d3d15-9f73-4fed-81e3-2f8ce400b967","Type":"ContainerStarted","Data":"b4e232ee39cf5e97eb836d4cf49c0e06bede1d3d78a029f1d1ce48a0169024c3"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.794459 4805 ???:1] "http: TLS handshake error from 192.168.126.11:49992: no serving certificate available for the kubelet"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.801207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" event={"ID":"19b69797-31c3-4e0e-8968-eb38c731b343","Type":"ContainerStarted","Data":"0db3c1e31890839da913b798a1e4e1c7e64e40335460180a2a81db457a127138"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.819203 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.819740 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v"]
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.821695 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.321661135 +0000 UTC m=+231.883415624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.826406 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" event={"ID":"f5c14f64-688c-4661-a440-5b171ee3d7b6","Type":"ContainerStarted","Data":"18ae3a7455fa95cd6efd4582d3101961df76b397846baffb6183792ab9c73d16"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.860500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.887669 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.905345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" event={"ID":"4a08155f-ab85-41d4-afcd-6d681c38f727","Type":"ContainerStarted","Data":"50cac1f7c20d16a2d9ecc11da5a9f9dd5e71e92835218e693bdc671cb64aa163"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.906310 4805 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9bxpz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.906385 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" podUID="4a08155f-ab85-41d4-afcd-6d681c38f727" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.922760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4"
Feb 26 17:18:36 crc kubenswrapper[4805]: E0226 17:18:36.923109 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.423096701 +0000 UTC m=+231.984851040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.923357 4805 ???:1] "http: TLS handshake error from 192.168.126.11:49996: no serving certificate available for the kubelet"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.924363 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-2dnn9"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.926123 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" podStartSLOduration=155.909008484 podStartE2EDuration="2m35.909008484s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.906919323 +0000 UTC m=+231.468673652" watchObservedRunningTime="2026-02-26 17:18:36.909008484 +0000 UTC m=+231.470762833"
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.929902 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n2k99"]
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.940628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" event={"ID":"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3","Type":"ContainerStarted","Data":"44409e5f3c239d4654c03c5b56f3d13a36d63b13eb2662035b7e9b9826c41c2d"}
Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.964804 4805 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d"] Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.980230 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" podStartSLOduration=154.980196456 podStartE2EDuration="2m34.980196456s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.955248612 +0000 UTC m=+231.517002951" watchObservedRunningTime="2026-02-26 17:18:36.980196456 +0000 UTC m=+231.541950795" Feb 26 17:18:36 crc kubenswrapper[4805]: I0226 17:18:36.996069 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gc2nd" podStartSLOduration=155.996049406 podStartE2EDuration="2m35.996049406s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:36.993649607 +0000 UTC m=+231.555403946" watchObservedRunningTime="2026-02-26 17:18:36.996049406 +0000 UTC m=+231.557803745" Feb 26 17:18:37 crc kubenswrapper[4805]: W0226 17:18:37.016917 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod293ce1a3_001e_4935_be39_8b40f869a2ad.slice/crio-ac231a9e9c7a3795c68369ff89f9288f7bab1d6f3cbad7ec76ac158ebd33e547 WatchSource:0}: Error finding container ac231a9e9c7a3795c68369ff89f9288f7bab1d6f3cbad7ec76ac158ebd33e547: Status 404 returned error can't find the container with id ac231a9e9c7a3795c68369ff89f9288f7bab1d6f3cbad7ec76ac158ebd33e547 Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.018902 4805 ???:1] "http: TLS handshake error from 192.168.126.11:50008: no serving certificate available 
for the kubelet" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.024124 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.025488 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.52546594 +0000 UTC m=+232.087220279 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.037236 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" podStartSLOduration=156.037216159 podStartE2EDuration="2m36.037216159s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.028268109 +0000 UTC m=+231.590022458" watchObservedRunningTime="2026-02-26 17:18:37.037216159 +0000 UTC m=+231.598970498" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.079447 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" event={"ID":"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f","Type":"ContainerStarted","Data":"60164c200f2f099a90f8d460237af904e8603af1a6e9dd10b1d6255edd870f78"} Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.079525 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g"] Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.079561 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-sqdmc"] Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.079572 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-79bz8"] Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.128689 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.149175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" event={"ID":"528c9fdc-fb48-460b-a06b-a07ce3c388c4","Type":"ContainerStarted","Data":"51fa25f42d28aa0657ab20f7d9ca19df2aa2ef00c52aea170c921580a525247f"} Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.149532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" event={"ID":"528c9fdc-fb48-460b-a06b-a07ce3c388c4","Type":"ContainerStarted","Data":"b264b44164a28570e9a077d30e8c8cfb48ce2fecf1f1eeab809c21f00029efe7"} Feb 26 17:18:37 crc kubenswrapper[4805]: W0226 17:18:37.097067 4805 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b4645cb_cf08_4b8d_9115_c2d7d4a40cae.slice/crio-1df3dbe90765963ea733535ab30da11ed0ee7b9d94a90f19cc04e13226fbd181 WatchSource:0}: Error finding container 1df3dbe90765963ea733535ab30da11ed0ee7b9d94a90f19cc04e13226fbd181: Status 404 returned error can't find the container with id 1df3dbe90765963ea733535ab30da11ed0ee7b9d94a90f19cc04e13226fbd181 Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.129443 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.629426398 +0000 UTC m=+232.191180737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.157270 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2nd5r" podStartSLOduration=155.157249813 podStartE2EDuration="2m35.157249813s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.079798557 +0000 UTC m=+231.641552906" watchObservedRunningTime="2026-02-26 17:18:37.157249813 +0000 UTC m=+231.719004152" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.159123 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg"] Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.183354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" event={"ID":"f6af6767-3331-4e84-97c7-385cd642443c","Type":"ContainerStarted","Data":"ea06cb5e7306df9f48cf1797b05d564ec23ec2af304cc0d08eaa24d4345eb4c3"} Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.193226 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-4b67j"] Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.221824 4805 ???:1] "http: TLS handshake error from 192.168.126.11:50024: no serving certificate available for the kubelet" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.227865 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-f72j6" podStartSLOduration=156.2278482 podStartE2EDuration="2m36.2278482s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.143778141 +0000 UTC m=+231.705532490" watchObservedRunningTime="2026-02-26 17:18:37.2278482 +0000 UTC m=+231.789602539" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.228344 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xbznm"] Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.278644 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.278893 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6s2jm" podStartSLOduration=156.278877146 podStartE2EDuration="2m36.278877146s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.22053398 +0000 UTC m=+231.782288319" watchObservedRunningTime="2026-02-26 17:18:37.278877146 +0000 UTC m=+231.840631485" Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.284041 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.783998642 +0000 UTC m=+232.345752991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.303717 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-k87cp" event={"ID":"bf02d64f-cad0-4878-b177-bf89cdfd3587","Type":"ContainerStarted","Data":"b6d3ed116e9eab74f37322b90af5fd2bb7bc3d8896be6a4a47151ead9f56b96a"} Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.347203 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" 
event={"ID":"9ff06563-787b-4f43-ab99-d900accc2791","Type":"ContainerStarted","Data":"26f4e9f455a212e288e18d8a1f3f2cab50dba469d726dc72878407ccfcdd35d5"} Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.347252 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.347266 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.358275 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.358340 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.358373 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-rw6n5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.358422 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" podUID="9ff06563-787b-4f43-ab99-d900accc2791" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 26 17:18:37 crc 
kubenswrapper[4805]: I0226 17:18:37.382975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.405729 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.905710056 +0000 UTC m=+232.467464395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.410578 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-frcbs" podStartSLOduration=155.410554675 podStartE2EDuration="2m35.410554675s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.409413407 +0000 UTC m=+231.971167746" watchObservedRunningTime="2026-02-26 17:18:37.410554675 +0000 UTC m=+231.972309014" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.411171 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress/router-default-5444994796-bnlgh" podStartSLOduration=156.41116425 podStartE2EDuration="2m36.41116425s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.31885934 +0000 UTC m=+231.880613679" watchObservedRunningTime="2026-02-26 17:18:37.41116425 +0000 UTC m=+231.972918599" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.485936 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.486717 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:37.986689749 +0000 UTC m=+232.548444088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.534334 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" podStartSLOduration=156.53429818 podStartE2EDuration="2m36.53429818s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.462166625 +0000 UTC m=+232.023920984" watchObservedRunningTime="2026-02-26 17:18:37.53429818 +0000 UTC m=+232.096052519" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.534897 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" podStartSLOduration=156.534887585 podStartE2EDuration="2m36.534887585s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.533912751 +0000 UTC m=+232.095667090" watchObservedRunningTime="2026-02-26 17:18:37.534887585 +0000 UTC m=+232.096641924" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.543500 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.561294 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:37 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:37 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:37 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.561418 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.588771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.589441 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.089416416 +0000 UTC m=+232.651170755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.610568 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-k87cp" podStartSLOduration=6.610543926 podStartE2EDuration="6.610543926s" podCreationTimestamp="2026-02-26 17:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.591327523 +0000 UTC m=+232.153081872" watchObservedRunningTime="2026-02-26 17:18:37.610543926 +0000 UTC m=+232.172298265" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.645569 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" podStartSLOduration=156.645548478 podStartE2EDuration="2m36.645548478s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.645132097 +0000 UTC m=+232.206886446" watchObservedRunningTime="2026-02-26 17:18:37.645548478 +0000 UTC m=+232.207302817" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.690047 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.700139 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.20009711 +0000 UTC m=+232.761851459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.793791 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.293758125 +0000 UTC m=+232.855512464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.795406 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.821953 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-lrc7d" podStartSLOduration=156.821936038 podStartE2EDuration="2m36.821936038s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:37.696202774 +0000 UTC m=+232.257957113" watchObservedRunningTime="2026-02-26 17:18:37.821936038 +0000 UTC m=+232.383690377" Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.897875 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.902539 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.402505751 +0000 UTC m=+232.964260090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: I0226 17:18:37.923055 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.936521 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.436495207 +0000 UTC m=+232.998249546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:37 crc kubenswrapper[4805]: E0226 17:18:37.958851 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb68ea4bc_0e19_42c3_80f4_7c71d84e97c3.slice/crio-d48cd1210e8c6532b7fda4858599d720efee99dae076a0f4055171a7fc1932ed.scope\": RecentStats: unable to find data in memory cache]" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.027269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.027666 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.5276395 +0000 UTC m=+233.089393839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.027782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.028165 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.528154233 +0000 UTC m=+233.089908572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.050103 4805 ???:1] "http: TLS handshake error from 192.168.126.11:50034: no serving certificate available for the kubelet" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.132757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.133365 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.633341411 +0000 UTC m=+233.195095750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.234135 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.234450 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.734439749 +0000 UTC m=+233.296194088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.335086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.335476 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.835447385 +0000 UTC m=+233.397201724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.375898 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" event={"ID":"293ce1a3-001e-4935-be39-8b40f869a2ad","Type":"ContainerStarted","Data":"ac231a9e9c7a3795c68369ff89f9288f7bab1d6f3cbad7ec76ac158ebd33e547"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.377845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" event={"ID":"9ff06563-787b-4f43-ab99-d900accc2791","Type":"ContainerStarted","Data":"f11eacbc081aeff631bc5532c71470407632be167691c3807640ada69b6847a1"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.379970 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-rw6n5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.380005 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" podUID="9ff06563-787b-4f43-ab99-d900accc2791" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.381468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" event={"ID":"99dff83c-eeb0-4382-89e6-9956167ea61c","Type":"ContainerStarted","Data":"a6b1a6b1a316606fd724a5db14bc131dd176a817a3406061e0fa0178dabb8ae4"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.381529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" event={"ID":"99dff83c-eeb0-4382-89e6-9956167ea61c","Type":"ContainerStarted","Data":"9973bfbc603429800555db39e87afbe8a00f20c999b5fd7e952555d358b036be"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.382719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" event={"ID":"9c4b829e-a7ab-4702-9310-4010906609a4","Type":"ContainerStarted","Data":"235a901088cdae375bc5e53f25eb8fd0739ec89ce68714e69b5515f6ca231eb3"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.396042 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n2k99" event={"ID":"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae","Type":"ContainerStarted","Data":"1df3dbe90765963ea733535ab30da11ed0ee7b9d94a90f19cc04e13226fbd181"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.397832 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" event={"ID":"24403043-1500-4b05-a4a3-6863604f54ad","Type":"ContainerStarted","Data":"4c3e2f874d3924a99580ea858c7204d561f48cc18e38948c9ff5ab45f5a3fb01"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.397854 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" event={"ID":"24403043-1500-4b05-a4a3-6863604f54ad","Type":"ContainerStarted","Data":"ee6009c8f7f5dcdcd32812c2f4215882a8b23af5571bb5792b9b9d9fe7b28df6"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.398714 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.400923 4805 generic.go:334] "Generic (PLEG): container finished" podID="b68ea4bc-0e19-42c3-80f4-7c71d84e97c3" containerID="d48cd1210e8c6532b7fda4858599d720efee99dae076a0f4055171a7fc1932ed" exitCode=0 Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.400977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" event={"ID":"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3","Type":"ContainerDied","Data":"d48cd1210e8c6532b7fda4858599d720efee99dae076a0f4055171a7fc1932ed"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.401514 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n6s2g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.401538 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" podUID="24403043-1500-4b05-a4a3-6863604f54ad" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.413749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" event={"ID":"67e1c0bf-8550-4126-986d-0d88d4d5f8ef","Type":"ContainerStarted","Data":"115ac26c02141f3321ec5487bdfc7edbc76413acb6bdd95d82b3fbc9e452b96c"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.436949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.438331 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:38.938315206 +0000 UTC m=+233.500069535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.443243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" event={"ID":"392795fe-61ee-4a31-9009-e0be88c6dd2d","Type":"ContainerStarted","Data":"0ec7e0dc0001699d54cd465876663c5d67034d297aa86e4456f9a817d4ff064a"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.460726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" event={"ID":"f57e2f33-3890-47ba-a25d-53d23342a7e7","Type":"ContainerStarted","Data":"22905df9eb5eedf2a93578690e2edecd9824c40d7c2cb0179f332c87ef439126"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.508596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" 
event={"ID":"cff54d20-3883-42f8-8f56-7948a115807d","Type":"ContainerStarted","Data":"eede755285bfed1498a0285e2ac01959409d44b5cc460cf279b5c83832db8db3"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.530928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" event={"ID":"4a986fa5-f0cd-4512-89fe-8e8ccad45745","Type":"ContainerStarted","Data":"c443e612514ee453c420f908d2697e1d1567cc782cc0008f2e57b9b85121e405"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.538448 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.546842 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.046803926 +0000 UTC m=+233.608558265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.549147 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:38 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:38 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:38 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.549216 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.622314 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.623851 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.629634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" event={"ID":"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa","Type":"ContainerStarted","Data":"63fea005b2bd9f803d0542f2054ad19f2ca04eb2fabb05cc8fd241c2089ce3bf"} Feb 26 17:18:38 crc 
kubenswrapper[4805]: I0226 17:18:38.641089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.641452 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.141439515 +0000 UTC m=+233.703193854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.680673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sqdmc" event={"ID":"99b107f5-1e0b-420c-be32-1ae140cba05d","Type":"ContainerStarted","Data":"6e73f8a5454088aa16cb1fd46e1f34c715927d6c0f49fcb166ed1958e8d9fd44"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.692627 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" event={"ID":"cd0845ab-7def-4118-8108-f1254a8f79b4","Type":"ContainerStarted","Data":"21716ef12b4ca4a8605515a9d20defa25fa34df0004446d511fe14e28fd94070"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.722635 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" event={"ID":"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df","Type":"ContainerStarted","Data":"c6d42831bfb41fac749b9466f491548f523f27086568ce5e4c4118c29f008db8"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.724988 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" event={"ID":"ba4ace0a-1ec3-49ff-aaec-cb6751d9a7df","Type":"ContainerStarted","Data":"4243224ec38055fd0be2373a37cd27fc96f59ce6851101ee00eac599d3230ce9"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.758188 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" event={"ID":"b0d1ffc5-ede9-49d3-8d52-3ebc49cdcb30","Type":"ContainerStarted","Data":"3c7020c29f063c298e46c58d5027dc5ff4c1c1c69386ffc93427d20238f66c73"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.765403 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.772547 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.27250428 +0000 UTC m=+233.834258619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.772767 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.775582 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.275568645 +0000 UTC m=+233.837322984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.847738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" event={"ID":"b656b811-2756-4cb0-8a5e-2fce336a3047","Type":"ContainerStarted","Data":"9243ae90063636082431d6c15c0166cc5b276fd0390e708215f7260571484f2d"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.847824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" event={"ID":"b656b811-2756-4cb0-8a5e-2fce336a3047","Type":"ContainerStarted","Data":"06b61be56f355cdc4c3c951c66da9427832192af4644c5f5a50cd2065b2346db"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.852584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" event={"ID":"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f","Type":"ContainerStarted","Data":"715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.853221 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.861687 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d7sjf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: 
connect: connection refused" start-of-body= Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.861766 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.864040 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-sz82x" podStartSLOduration=157.863994561 podStartE2EDuration="2m37.863994561s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:38.862134796 +0000 UTC m=+233.423889135" watchObservedRunningTime="2026-02-26 17:18:38.863994561 +0000 UTC m=+233.425748900" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.874133 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.874576 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.374543561 +0000 UTC m=+233.936297900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.874776 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.876993 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.376974811 +0000 UTC m=+233.938729360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.885648 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" podStartSLOduration=156.885619344 podStartE2EDuration="2m36.885619344s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:38.881850761 +0000 UTC m=+233.443605100" watchObservedRunningTime="2026-02-26 17:18:38.885619344 +0000 UTC m=+233.447373683" Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.909072 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" event={"ID":"ce8a1740-3334-42a4-af1d-1a7de4758c9c","Type":"ContainerStarted","Data":"39cb649c80d4dcec4829bcfb53f972b0c82e025d1454908e435876e33183651d"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.947070 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" event={"ID":"f86d7b9c-1d60-4153-98aa-6a8985a78907","Type":"ContainerStarted","Data":"5fcc40705f017b71a752eec35ff1995f674f84a62f80c6caefade7b27a33f9e8"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.951813 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dnn9" 
event={"ID":"dd3f2c3b-2417-44c9-bd45-02b10d68cf24","Type":"ContainerStarted","Data":"6c7fc7cc98f5a11339ae09d9e842054dc3e3266731dca078ea954da269a655c0"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.957956 4805 generic.go:334] "Generic (PLEG): container finished" podID="8c2d3d15-9f73-4fed-81e3-2f8ce400b967" containerID="9d13f66a8c22a8caaf66eac2c0a6b0c18431f1c6e5a67df589a8b0994e91cd3d" exitCode=0 Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.958063 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" event={"ID":"8c2d3d15-9f73-4fed-81e3-2f8ce400b967","Type":"ContainerDied","Data":"9d13f66a8c22a8caaf66eac2c0a6b0c18431f1c6e5a67df589a8b0994e91cd3d"} Feb 26 17:18:38 crc kubenswrapper[4805]: I0226 17:18:38.976794 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:38 crc kubenswrapper[4805]: E0226 17:18:38.978085 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.478046418 +0000 UTC m=+234.039800747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.005004 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.005180 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.047715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" event={"ID":"f5c14f64-688c-4661-a440-5b171ee3d7b6","Type":"ContainerStarted","Data":"2bedd670332d1b109468fbf6773041a07bb75ec5f42306e44cca4921e3da077c"} Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.047841 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9bxpz" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.078941 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.083382 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" podStartSLOduration=158.083358659 podStartE2EDuration="2m38.083358659s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.082953759 +0000 UTC m=+233.644708098" watchObservedRunningTime="2026-02-26 17:18:39.083358659 +0000 UTC m=+233.645112998" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.083925 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.583908083 +0000 UTC m=+234.145662422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.183228 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.184094 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.684062847 +0000 UTC m=+234.245817206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.184396 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.190741 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.690699621 +0000 UTC m=+234.252453960 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.201193 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5jhjn" podStartSLOduration=157.201176429 podStartE2EDuration="2m37.201176429s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.200617945 +0000 UTC m=+233.762372284" watchObservedRunningTime="2026-02-26 17:18:39.201176429 +0000 UTC m=+233.762930768" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.201484 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-25fk7" podStartSLOduration=158.201479216 podStartE2EDuration="2m38.201479216s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.136535928 +0000 UTC m=+233.698290277" watchObservedRunningTime="2026-02-26 17:18:39.201479216 +0000 UTC m=+233.763233545" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.238636 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tfcdt"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.249033 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.271547 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-k9xsh" podStartSLOduration=158.27153061 podStartE2EDuration="2m38.27153061s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.256719655 +0000 UTC m=+233.818474004" watchObservedRunningTime="2026-02-26 17:18:39.27153061 +0000 UTC m=+233.833284949" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.273071 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfcdt"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.273490 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.294120 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.294347 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-catalog-content\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.294388 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-utilities\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.294432 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfrzh\" (UniqueName: \"kubernetes.io/projected/043bfd8c-1387-4b00-ad52-1e4efd43c942-kube-api-access-xfrzh\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.294561 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.794546776 +0000 UTC m=+234.356301115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.402490 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.402989 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-catalog-content\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.403120 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-utilities\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.403302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfrzh\" (UniqueName: \"kubernetes.io/projected/043bfd8c-1387-4b00-ad52-1e4efd43c942-kube-api-access-xfrzh\") pod \"community-operators-tfcdt\" (UID: 
\"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.404306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-catalog-content\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.404589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-utilities\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.404805 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:39.904778959 +0000 UTC m=+234.466533298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.417733 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-2dnn9" podStartSLOduration=158.417715207 podStartE2EDuration="2m38.417715207s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.416746383 +0000 UTC m=+233.978500732" watchObservedRunningTime="2026-02-26 17:18:39.417715207 +0000 UTC m=+233.979469636" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.457798 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" podStartSLOduration=157.457774113 podStartE2EDuration="2m37.457774113s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.373654233 +0000 UTC m=+233.935408572" watchObservedRunningTime="2026-02-26 17:18:39.457774113 +0000 UTC m=+234.019528452" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.462677 4805 ???:1] "http: TLS handshake error from 192.168.126.11:50036: no serving certificate available for the kubelet" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.473797 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfrzh\" (UniqueName: 
\"kubernetes.io/projected/043bfd8c-1387-4b00-ad52-1e4efd43c942-kube-api-access-xfrzh\") pod \"community-operators-tfcdt\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.494157 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rmjsl"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.495986 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.505651 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.506414 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.006393619 +0000 UTC m=+234.568147958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.506498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.506730 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmjsl"] Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.506860 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.006853561 +0000 UTC m=+234.568607890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.531697 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.552564 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:39 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:39 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:39 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.552736 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.572152 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ml8xl" podStartSLOduration=158.572115467 podStartE2EDuration="2m38.572115467s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:39.570419905 +0000 UTC m=+234.132174264" 
watchObservedRunningTime="2026-02-26 17:18:39.572115467 +0000 UTC m=+234.133869806" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.613366 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.613673 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnv8p\" (UniqueName: \"kubernetes.io/projected/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-kube-api-access-rnv8p\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.613746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-catalog-content\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.613799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-utilities\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.613977 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.113955226 +0000 UTC m=+234.675709565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.679565 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7fvvs"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.682383 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.699564 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.713515 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fvvs"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.716867 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-catalog-content\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.716914 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.716942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnrtw\" (UniqueName: \"kubernetes.io/projected/83d8504f-33ad-4812-bc1e-11233c225974-kube-api-access-bnrtw\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.716971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-catalog-content\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 
17:18:39.717005 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-utilities\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.717085 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-utilities\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.717137 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnv8p\" (UniqueName: \"kubernetes.io/projected/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-kube-api-access-rnv8p\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.717656 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.217636558 +0000 UTC m=+234.779390977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.717835 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-catalog-content\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.722615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-utilities\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.767728 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnv8p\" (UniqueName: \"kubernetes.io/projected/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-kube-api-access-rnv8p\") pod \"certified-operators-rmjsl\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.818775 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.819067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-utilities\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.819157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-catalog-content\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.819187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnrtw\" (UniqueName: \"kubernetes.io/projected/83d8504f-33ad-4812-bc1e-11233c225974-kube-api-access-bnrtw\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.819634 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.319614577 +0000 UTC m=+234.881368916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.820045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-utilities\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.820289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-catalog-content\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.831470 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzkpq"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.832846 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.845890 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.882709 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzkpq"] Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.882914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnrtw\" (UniqueName: \"kubernetes.io/projected/83d8504f-33ad-4812-bc1e-11233c225974-kube-api-access-bnrtw\") pod \"community-operators-7fvvs\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.923642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-catalog-content\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.923706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6lzz\" (UniqueName: \"kubernetes.io/projected/87196950-f6be-442b-a725-3cdee5962f55-kube-api-access-p6lzz\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.923752 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-utilities\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:39 crc kubenswrapper[4805]: I0226 17:18:39.923802 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:39 crc kubenswrapper[4805]: E0226 17:18:39.924205 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.4241905 +0000 UTC m=+234.985944839 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.018605 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.024580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.024828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-catalog-content\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.024855 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6lzz\" (UniqueName: \"kubernetes.io/projected/87196950-f6be-442b-a725-3cdee5962f55-kube-api-access-p6lzz\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.024886 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-utilities\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.028522 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-catalog-content\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") 
" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.028638 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.52861527 +0000 UTC m=+235.090369669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.031554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-utilities\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.066972 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" event={"ID":"8c2d3d15-9f73-4fed-81e3-2f8ce400b967","Type":"ContainerStarted","Data":"5a62079f179c4ddf96df633f562c2c523548116d0acbff53f1e66546297d447f"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.067845 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.090322 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6lzz\" 
(UniqueName: \"kubernetes.io/projected/87196950-f6be-442b-a725-3cdee5962f55-kube-api-access-p6lzz\") pod \"certified-operators-gzkpq\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.110851 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" event={"ID":"293ce1a3-001e-4935-be39-8b40f869a2ad","Type":"ContainerStarted","Data":"350ed6728383c3f6e24ef8eb790c5f361edbdb75b004b7056fe11184cd76ff56"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.110946 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" event={"ID":"293ce1a3-001e-4935-be39-8b40f869a2ad","Type":"ContainerStarted","Data":"afbd395cd11e6738962b7b6816824f2fc1e1dd6dd1a3105a492c451a47bd9156"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.140958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.142582 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.642569244 +0000 UTC m=+235.204323583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.144675 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" podStartSLOduration=159.144661856 podStartE2EDuration="2m39.144661856s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.142880822 +0000 UTC m=+234.704635161" watchObservedRunningTime="2026-02-26 17:18:40.144661856 +0000 UTC m=+234.706416195" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.148541 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" event={"ID":"99dff83c-eeb0-4382-89e6-9956167ea61c","Type":"ContainerStarted","Data":"5f58705c95ebe05cd53e271b69b78d28be91d6bffcbe3dfad7990dc01f752953"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.150779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" event={"ID":"392795fe-61ee-4a31-9009-e0be88c6dd2d","Type":"ContainerStarted","Data":"7af5099e2010f56dcc702ea7eb16bb8fb5fd2996aa89c4d1f16a1f3df51cb009"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.150836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" 
event={"ID":"392795fe-61ee-4a31-9009-e0be88c6dd2d","Type":"ContainerStarted","Data":"822a5edd49aad3eacb2cc8c554eb63836769f16ad1d54a32b4e451d5af5547c7"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.151081 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.196527 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.201520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" event={"ID":"9c4b829e-a7ab-4702-9310-4010906609a4","Type":"ContainerStarted","Data":"9104d4010ec1a5d8b79273fb64157b0fc0e000581a75bba8699c20d0d3289306"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.203320 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.205789 4805 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2f78d container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.205884 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" podUID="9c4b829e-a7ab-4702-9310-4010906609a4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.208314 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-trm4v" podStartSLOduration=158.208289982 podStartE2EDuration="2m38.208289982s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.207080602 +0000 UTC m=+234.768834961" watchObservedRunningTime="2026-02-26 17:18:40.208289982 +0000 UTC m=+234.770044321" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.223245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dnn9" event={"ID":"dd3f2c3b-2417-44c9-bd45-02b10d68cf24","Type":"ContainerStarted","Data":"7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.232556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" event={"ID":"b68ea4bc-0e19-42c3-80f4-7c71d84e97c3","Type":"ContainerStarted","Data":"ff250e04335654cfe6502e21d2f2cfbca260c31f4491d05cedd811d7320fe964"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.243776 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.244909 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.744893403 +0000 UTC m=+235.306647742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.253283 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" event={"ID":"4a986fa5-f0cd-4512-89fe-8e8ccad45745","Type":"ContainerStarted","Data":"4ff877d15174e8270cdff3463752c5a77cd64c51a68e80f9681647f0316a5c43"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.289457 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" podStartSLOduration=158.289417168 podStartE2EDuration="2m38.289417168s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.281738919 +0000 UTC m=+234.843493268" watchObservedRunningTime="2026-02-26 17:18:40.289417168 +0000 UTC m=+234.851171507" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.302574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n2k99" event={"ID":"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae","Type":"ContainerStarted","Data":"26bef9c5bca1706210d8d1615d8ec41651464abc0486c87d462cfc18df0ea461"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.346687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.347480 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.847462187 +0000 UTC m=+235.409216536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.353716 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" event={"ID":"94037ae8-6f5e-48b4-8cea-ea9cd9d3e0fa","Type":"ContainerStarted","Data":"93f8337eff99d149d50b01df435f12b1bc81787f9b4bec5794c5c99979819724"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.392268 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-sqdmc" event={"ID":"99b107f5-1e0b-420c-be32-1ae140cba05d","Type":"ContainerStarted","Data":"3cad9387793acfddbeada3b0bf20c1d41e08475b7955cd225d0e4fdadf6a029d"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.400913 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" podStartSLOduration=158.400883151 podStartE2EDuration="2m38.400883151s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.397475267 +0000 UTC m=+234.959229606" watchObservedRunningTime="2026-02-26 17:18:40.400883151 +0000 UTC m=+234.962637480" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.402962 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6lgpv" podStartSLOduration=158.402951422 podStartE2EDuration="2m38.402951422s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.309950643 +0000 UTC m=+234.871704982" watchObservedRunningTime="2026-02-26 17:18:40.402951422 +0000 UTC m=+234.964705761" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.438855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9ff4c" event={"ID":"cd0845ab-7def-4118-8108-f1254a8f79b4","Type":"ContainerStarted","Data":"bae0a53cd1fbf4d5f6f266ebd7b2215759020055e891ed6e5205aca946c59d02"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.452275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.455185 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:40.955146206 +0000 UTC m=+235.516900695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.471572 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" podStartSLOduration=158.47154516 podStartE2EDuration="2m38.47154516s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.458668573 +0000 UTC m=+235.020422922" watchObservedRunningTime="2026-02-26 17:18:40.47154516 +0000 UTC m=+235.033299499" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.472756 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" event={"ID":"f57e2f33-3890-47ba-a25d-53d23342a7e7","Type":"ContainerStarted","Data":"b7f1ed88df8848b657d7c3518b93d4f62be234f1b6cdd98273e61464a8279bab"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.531764 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-sqdmc" podStartSLOduration=9.531744291 podStartE2EDuration="9.531744291s" podCreationTimestamp="2026-02-26 17:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.508970251 +0000 UTC m=+235.070724590" watchObservedRunningTime="2026-02-26 17:18:40.531744291 +0000 UTC m=+235.093498630" Feb 26 17:18:40 crc 
kubenswrapper[4805]: I0226 17:18:40.558382 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:40 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:40 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:40 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.558446 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.559439 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.561331 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.061313359 +0000 UTC m=+235.623067768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.588103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" event={"ID":"cff54d20-3883-42f8-8f56-7948a115807d","Type":"ContainerStarted","Data":"8fd3b61efa2ebca940e210812ca2a01ff731585cd5b4143d378be9e665a740be"} Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.590268 4805 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n6s2g container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.590335 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" podUID="24403043-1500-4b05-a4a3-6863604f54ad" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.590292 4805 patch_prober.go:28] interesting pod/console-operator-58897d9998-rw6n5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.590431 4805 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" podUID="9ff06563-787b-4f43-ab99-d900accc2791" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.597483 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d7sjf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.597590 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.618009 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-79bz8" podStartSLOduration=158.617986944 podStartE2EDuration="2m38.617986944s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.589204425 +0000 UTC m=+235.150958764" watchObservedRunningTime="2026-02-26 17:18:40.617986944 +0000 UTC m=+235.179741283" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.632829 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" podStartSLOduration=158.632795088 podStartE2EDuration="2m38.632795088s" podCreationTimestamp="2026-02-26 17:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.621738276 +0000 UTC m=+235.183492615" watchObservedRunningTime="2026-02-26 17:18:40.632795088 +0000 UTC m=+235.194549427" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.670813 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.673186 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.173161181 +0000 UTC m=+235.734915690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.673228 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-czlsm" podStartSLOduration=159.673208032 podStartE2EDuration="2m39.673208032s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.67309451 +0000 UTC m=+235.234848849" watchObservedRunningTime="2026-02-26 17:18:40.673208032 +0000 UTC m=+235.234962381" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.752280 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4ws68" podStartSLOduration=159.752254898 podStartE2EDuration="2m39.752254898s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:40.734655544 +0000 UTC m=+235.296409903" watchObservedRunningTime="2026-02-26 17:18:40.752254898 +0000 UTC m=+235.314009237" Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.775829 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: 
\"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.782727 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.282709587 +0000 UTC m=+235.844464036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.882976 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.883212 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.383183389 +0000 UTC m=+235.944937728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.883471 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.883967 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.383959829 +0000 UTC m=+235.945714158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:40 crc kubenswrapper[4805]: I0226 17:18:40.994561 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:40 crc kubenswrapper[4805]: E0226 17:18:40.994900 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.494884477 +0000 UTC m=+236.056638816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.071382 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmjsl"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.102875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.103629 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.603601192 +0000 UTC m=+236.165355721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.184409 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tfcdt"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.206079 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.206591 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.706562646 +0000 UTC m=+236.268316985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.212103 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jttjv"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.213574 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.218826 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.247317 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jttjv"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.280179 4805 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qkzq5 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 26 17:18:41 crc kubenswrapper[4805]: [+]log ok Feb 26 17:18:41 crc kubenswrapper[4805]: [+]etcd ok Feb 26 17:18:41 crc kubenswrapper[4805]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/max-in-flight-filter failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/storage-object-count-tracker-hook failed: reason 
withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/project.openshift.io-projectcache failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/project.openshift.io-projectauthorizationcache failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 26 17:18:41 crc kubenswrapper[4805]: [-]poststarthook/quota.openshift.io-clusterquotamapping failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: livez check failed Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.280267 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" podUID="f6af6767-3331-4e84-97c7-385cd642443c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.300470 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzkpq"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.314310 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-catalog-content\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.314406 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lf6\" (UniqueName: \"kubernetes.io/projected/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-kube-api-access-58lf6\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.314449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.314489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-utilities\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.314871 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.814857831 +0000 UTC m=+236.376612170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.420255 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.420555 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.920519311 +0000 UTC m=+236.482273660 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.420998 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-catalog-content\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.421187 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58lf6\" (UniqueName: \"kubernetes.io/projected/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-kube-api-access-58lf6\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.421261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.421329 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-utilities\") pod \"redhat-marketplace-jttjv\" (UID: 
\"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.422127 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-utilities\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.422561 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:41.922548421 +0000 UTC m=+236.484302830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.422818 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-catalog-content\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.486473 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fvvs"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.501075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-58lf6\" (UniqueName: \"kubernetes.io/projected/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-kube-api-access-58lf6\") pod \"redhat-marketplace-jttjv\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.522055 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.522500 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.02247079 +0000 UTC m=+236.584225169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.553297 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:41 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:41 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.553368 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.611504 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2xsss"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.613162 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.624212 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.624668 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.124647234 +0000 UTC m=+236.686401613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.636807 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.638171 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xsss"] Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.646141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fvvs" event={"ID":"83d8504f-33ad-4812-bc1e-11233c225974","Type":"ContainerStarted","Data":"6072246c8d3ab45eac6d8b3267a2965acb1a1ee9ade6e33d188acd735bf12d0e"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.681128 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfcdt" event={"ID":"043bfd8c-1387-4b00-ad52-1e4efd43c942","Type":"ContainerStarted","Data":"22404ef21c15145bc08558aea2ffe42aa9d1fc50fdb7930736eb5177f0156df5"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.689498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerStarted","Data":"34cbff5bc8cf6298bff43d3ca62fde5fab21ed596e5c62a72589a70dc9e3c774"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.702105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerStarted","Data":"e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.702151 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerStarted","Data":"ada6bb3732d4070c97c30a4bbdcc84dba25bd7637760060d01e08e0dfd378959"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.713003 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-4b67j" event={"ID":"4a986fa5-f0cd-4512-89fe-8e8ccad45745","Type":"ContainerStarted","Data":"849594bff9bf5be4938708bae1bc1206b6522d11b58c6e016ddb0467b5d27ac2"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.731125 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.731341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-catalog-content\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.731394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-utilities\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.731499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2qjc\" (UniqueName: \"kubernetes.io/projected/7a876494-42c5-4be9-aad1-46d8ce3c68bb-kube-api-access-n2qjc\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.731702 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.231667698 +0000 UTC m=+236.793422047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.759992 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" event={"ID":"f86d7b9c-1d60-4153-98aa-6a8985a78907","Type":"ContainerStarted","Data":"d5a81049704daacba704c4e5baebf1d43d856d4c2a546829259a975b816220f4"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.805043 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n2k99" event={"ID":"6b4645cb-cf08-4b8d-9115-c2d7d4a40cae","Type":"ContainerStarted","Data":"91166ec033a7ba8eae12d8728dde561fecec2b5ddbbc31745e0b157d061fc466"} Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.808994 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.836925 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2qjc\" (UniqueName: \"kubernetes.io/projected/7a876494-42c5-4be9-aad1-46d8ce3c68bb-kube-api-access-n2qjc\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 
17:18:41.837087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-catalog-content\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.837152 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-utilities\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.837255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.840206 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.340170748 +0000 UTC m=+236.901925087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.840221 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-utilities\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.840832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-catalog-content\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.874388 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2f78d" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.903156 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2qjc\" (UniqueName: \"kubernetes.io/projected/7a876494-42c5-4be9-aad1-46d8ce3c68bb-kube-api-access-n2qjc\") pod \"redhat-marketplace-2xsss\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:41 crc kubenswrapper[4805]: I0226 17:18:41.939988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:41 crc kubenswrapper[4805]: E0226 17:18:41.942077 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.442055475 +0000 UTC m=+237.003809814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:41.999637 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n2k99" podStartSLOduration=10.999612211 podStartE2EDuration="10.999612211s" podCreationTimestamp="2026-02-26 17:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:41.903626199 +0000 UTC m=+236.465380538" watchObservedRunningTime="2026-02-26 17:18:41.999612211 +0000 UTC m=+236.561366550" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.042979 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" 
(UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.043307 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.543294976 +0000 UTC m=+237.105049305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.130499 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.148283 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.148815 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.648794822 +0000 UTC m=+237.210549161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.163903 4805 ???:1] "http: TLS handshake error from 192.168.126.11:50052: no serving certificate available for the kubelet" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.258813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.259740 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.759724392 +0000 UTC m=+237.321478731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.361437 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.362622 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.862587093 +0000 UTC m=+237.424341432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.422078 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6s2g" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.468069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.468475 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:42.968461478 +0000 UTC m=+237.530215817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.471920 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jttjv"] Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.549923 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:42 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:42 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:42 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.550033 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.572088 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.572634 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.072607551 +0000 UTC m=+237.634361890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.606771 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wnt7l"] Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.608461 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.621731 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.632061 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wnt7l"] Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.675411 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ngs29"] Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.676032 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerName="controller-manager" containerID="cri-o://3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12" gracePeriod=30 Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.679652 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-utilities\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.679710 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfr46\" (UniqueName: \"kubernetes.io/projected/231a1216-2a55-4e7b-b026-104624c69857-kube-api-access-dfr46\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.679748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-catalog-content\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.679790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.680234 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.180202219 +0000 UTC m=+237.741956558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.698522 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.753225 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"] Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.753598 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" podUID="f440d939-b304-4728-afc4-ad814d771fbb" containerName="route-controller-manager" containerID="cri-o://2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8" gracePeriod=30 Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.780810 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.781342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-utilities\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " 
pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.781399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfr46\" (UniqueName: \"kubernetes.io/projected/231a1216-2a55-4e7b-b026-104624c69857-kube-api-access-dfr46\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.781437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-catalog-content\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.781995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-catalog-content\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.788463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-utilities\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.788908 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.288845162 +0000 UTC m=+237.850599501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.841930 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerID="e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a" exitCode=0 Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.842126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerDied","Data":"e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a"} Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.842785 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfr46\" (UniqueName: \"kubernetes.io/projected/231a1216-2a55-4e7b-b026-104624c69857-kube-api-access-dfr46\") pod \"redhat-operators-wnt7l\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.889171 4805 generic.go:334] "Generic (PLEG): container finished" podID="83d8504f-33ad-4812-bc1e-11233c225974" containerID="74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893" exitCode=0 Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.890052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fvvs" event={"ID":"83d8504f-33ad-4812-bc1e-11233c225974","Type":"ContainerDied","Data":"74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893"} 
Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.890230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:42 crc kubenswrapper[4805]: E0226 17:18:42.890524 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.390511324 +0000 UTC m=+237.952265663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.922222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jttjv" event={"ID":"6239f68a-d80a-4fd4-9a6d-69bd48b1c015","Type":"ContainerStarted","Data":"15ac97b2b081bb76149219f251e5191ed61b510817331a1e777f102bd39901ce"} Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.941915 4805 generic.go:334] "Generic (PLEG): container finished" podID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerID="4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca" exitCode=0 Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.941998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tfcdt" event={"ID":"043bfd8c-1387-4b00-ad52-1e4efd43c942","Type":"ContainerDied","Data":"4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca"} Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.952680 4805 generic.go:334] "Generic (PLEG): container finished" podID="87196950-f6be-442b-a725-3cdee5962f55" containerID="aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66" exitCode=0 Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.954173 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerDied","Data":"aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66"} Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.996669 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n5l62"] Feb 26 17:18:42 crc kubenswrapper[4805]: I0226 17:18:42.997853 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.001407 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.003413 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.503392452 +0000 UTC m=+238.065146801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.009675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5l62"] Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.033056 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.064890 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xsss"] Feb 26 17:18:43 crc kubenswrapper[4805]: W0226 17:18:43.108227 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a876494_42c5_4be9_aad1_46d8ce3c68bb.slice/crio-2cf95c71cada185b9870da61b2081b121a02856e36fd269527db7a0f8a0fe6de WatchSource:0}: Error finding container 2cf95c71cada185b9870da61b2081b121a02856e36fd269527db7a0f8a0fe6de: Status 404 returned error can't find the container with id 2cf95c71cada185b9870da61b2081b121a02856e36fd269527db7a0f8a0fe6de Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.109261 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-utilities\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.109343 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.109475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/884caea0-c055-415b-92c3-fae420465726-kube-api-access-rjwsj\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.109518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-catalog-content\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.123164 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.623146109 +0000 UTC m=+238.184900448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.214658 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.215867 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.71583238 +0000 UTC m=+238.277586729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.215976 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/884caea0-c055-415b-92c3-fae420465726-kube-api-access-rjwsj\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.216051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-catalog-content\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.216141 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-utilities\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.216726 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-utilities\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 
17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.217355 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-catalog-content\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.276673 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/884caea0-c055-415b-92c3-fae420465726-kube-api-access-rjwsj\") pod \"redhat-operators-n5l62\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.317257 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.317599 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.817586533 +0000 UTC m=+238.379340872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.334367 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.406452 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.418417 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.419292 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:43.919272006 +0000 UTC m=+238.481026345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.442602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wnt7l"] Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.452609 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.517528 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6l2g9" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.519467 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e30706-eb6c-42d4-a28d-aa664f89ed80-serving-cert\") pod \"05e30706-eb6c-42d4-a28d-aa664f89ed80\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.519574 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-proxy-ca-bundles\") pod \"05e30706-eb6c-42d4-a28d-aa664f89ed80\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.519683 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-config\") pod \"05e30706-eb6c-42d4-a28d-aa664f89ed80\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.519880 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwbnd\" (UniqueName: \"kubernetes.io/projected/05e30706-eb6c-42d4-a28d-aa664f89ed80-kube-api-access-qwbnd\") pod \"05e30706-eb6c-42d4-a28d-aa664f89ed80\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.519938 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-client-ca\") pod \"05e30706-eb6c-42d4-a28d-aa664f89ed80\" (UID: \"05e30706-eb6c-42d4-a28d-aa664f89ed80\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.520134 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.520500 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.020487916 +0000 UTC m=+238.582242255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.521647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "05e30706-eb6c-42d4-a28d-aa664f89ed80" (UID: "05e30706-eb6c-42d4-a28d-aa664f89ed80"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.522332 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-client-ca" (OuterVolumeSpecName: "client-ca") pod "05e30706-eb6c-42d4-a28d-aa664f89ed80" (UID: "05e30706-eb6c-42d4-a28d-aa664f89ed80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.522497 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-config" (OuterVolumeSpecName: "config") pod "05e30706-eb6c-42d4-a28d-aa664f89ed80" (UID: "05e30706-eb6c-42d4-a28d-aa664f89ed80"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.526425 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05e30706-eb6c-42d4-a28d-aa664f89ed80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "05e30706-eb6c-42d4-a28d-aa664f89ed80" (UID: "05e30706-eb6c-42d4-a28d-aa664f89ed80"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.527263 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05e30706-eb6c-42d4-a28d-aa664f89ed80-kube-api-access-qwbnd" (OuterVolumeSpecName: "kube-api-access-qwbnd") pod "05e30706-eb6c-42d4-a28d-aa664f89ed80" (UID: "05e30706-eb6c-42d4-a28d-aa664f89ed80"). InnerVolumeSpecName "kube-api-access-qwbnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.552123 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:43 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:43 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:43 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.552209 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.620645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-config\") pod \"f440d939-b304-4728-afc4-ad814d771fbb\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.620688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f440d939-b304-4728-afc4-ad814d771fbb-serving-cert\") pod \"f440d939-b304-4728-afc4-ad814d771fbb\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.620836 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.620865 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpd2h\" (UniqueName: \"kubernetes.io/projected/f440d939-b304-4728-afc4-ad814d771fbb-kube-api-access-lpd2h\") pod \"f440d939-b304-4728-afc4-ad814d771fbb\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.620907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-client-ca\") pod \"f440d939-b304-4728-afc4-ad814d771fbb\" (UID: \"f440d939-b304-4728-afc4-ad814d771fbb\") " Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621273 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwbnd\" (UniqueName: \"kubernetes.io/projected/05e30706-eb6c-42d4-a28d-aa664f89ed80-kube-api-access-qwbnd\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621287 4805 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621296 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05e30706-eb6c-42d4-a28d-aa664f89ed80-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621304 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621313 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05e30706-eb6c-42d4-a28d-aa664f89ed80-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-client-ca" (OuterVolumeSpecName: "client-ca") pod "f440d939-b304-4728-afc4-ad814d771fbb" (UID: "f440d939-b304-4728-afc4-ad814d771fbb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.621901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-config" (OuterVolumeSpecName: "config") pod "f440d939-b304-4728-afc4-ad814d771fbb" (UID: "f440d939-b304-4728-afc4-ad814d771fbb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.622256 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.12221916 +0000 UTC m=+238.683973509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.629251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.631044 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f440d939-b304-4728-afc4-ad814d771fbb-kube-api-access-lpd2h" (OuterVolumeSpecName: "kube-api-access-lpd2h") pod "f440d939-b304-4728-afc4-ad814d771fbb" (UID: "f440d939-b304-4728-afc4-ad814d771fbb"). InnerVolumeSpecName "kube-api-access-lpd2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.637617 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f440d939-b304-4728-afc4-ad814d771fbb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f440d939-b304-4728-afc4-ad814d771fbb" (UID: "f440d939-b304-4728-afc4-ad814d771fbb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.639140 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qkzq5" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.678120 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n5l62"] Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.722831 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.722943 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.722954 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f440d939-b304-4728-afc4-ad814d771fbb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.722963 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpd2h\" (UniqueName: \"kubernetes.io/projected/f440d939-b304-4728-afc4-ad814d771fbb-kube-api-access-lpd2h\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.722971 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f440d939-b304-4728-afc4-ad814d771fbb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.723576 4805 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.223555973 +0000 UTC m=+238.785310393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:43 crc kubenswrapper[4805]: I0226 17:18:43.830794 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:43 crc kubenswrapper[4805]: E0226 17:18:43.831181 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.331144491 +0000 UTC m=+238.892898830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.932529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:43.932906 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.432888205 +0000 UTC m=+238.994642544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.972151 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerStarted","Data":"8ae67eb4b56d2107cbec72987ceb1b6325462d73fa9a5b6c0d81be810c77df81"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.973622 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerStarted","Data":"58b9956844f37b58c34bf84f23e37b4dbb0f0cdf3f01c2fbc77bce8e737f7629"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.975027 4805 generic.go:334] "Generic (PLEG): container finished" podID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerID="58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877" exitCode=0 Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.975174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jttjv" event={"ID":"6239f68a-d80a-4fd4-9a6d-69bd48b1c015","Type":"ContainerDied","Data":"58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.983087 4805 generic.go:334] "Generic (PLEG): container finished" podID="f440d939-b304-4728-afc4-ad814d771fbb" containerID="2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8" exitCode=0 Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.983256 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.983683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" event={"ID":"f440d939-b304-4728-afc4-ad814d771fbb","Type":"ContainerDied","Data":"2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.983726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw" event={"ID":"f440d939-b304-4728-afc4-ad814d771fbb","Type":"ContainerDied","Data":"f97cbcae5335950a4723f8b9921e93493aa8befc48b9b1d1e39a6269302f2726"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.983747 4805 scope.go:117] "RemoveContainer" containerID="2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.989085 4805 generic.go:334] "Generic (PLEG): container finished" podID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerID="3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12" exitCode=0 Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.989157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" event={"ID":"05e30706-eb6c-42d4-a28d-aa664f89ed80","Type":"ContainerDied","Data":"3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:43.989192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" event={"ID":"05e30706-eb6c-42d4-a28d-aa664f89ed80","Type":"ContainerDied","Data":"bb7281c2f84b6d9a7a01c2382232fbce8a6109d457a7e9d167f5ec3a1587d747"} Feb 26 17:18:44 crc 
kubenswrapper[4805]: I0226 17:18:43.989194 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ngs29" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.014970 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerID="8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341" exitCode=0 Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.015571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xsss" event={"ID":"7a876494-42c5-4be9-aad1-46d8ce3c68bb","Type":"ContainerDied","Data":"8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.015596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xsss" event={"ID":"7a876494-42c5-4be9-aad1-46d8ce3c68bb","Type":"ContainerStarted","Data":"2cf95c71cada185b9870da61b2081b121a02856e36fd269527db7a0f8a0fe6de"} Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.033774 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.034383 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.534363282 +0000 UTC m=+239.096117621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.035599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.036282 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.536270729 +0000 UTC m=+239.098025068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.091445 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.091504 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.098772 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.098808 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.136649 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.136873 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.636841594 +0000 UTC m=+239.198595943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.137301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.137652 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.637637213 +0000 UTC m=+239.199391562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.180936 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.181267 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f440d939-b304-4728-afc4-ad814d771fbb" containerName="route-controller-manager" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.181279 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f440d939-b304-4728-afc4-ad814d771fbb" containerName="route-controller-manager" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.181291 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerName="controller-manager" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.181297 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerName="controller-manager" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.181402 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" containerName="controller-manager" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.181412 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f440d939-b304-4728-afc4-ad814d771fbb" containerName="route-controller-manager" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.181830 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.185603 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.186980 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.187303 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.191969 4805 scope.go:117] "RemoveContainer" containerID="2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.193061 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8\": container with ID starting with 2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8 not found: ID does not exist" containerID="2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.193130 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8"} err="failed to get container status \"2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8\": rpc error: code = NotFound desc = could not find container \"2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8\": container with ID starting with 2fdf2abb7ecac07e5c4969ef95c2d16003cdcdcd2a9f1e420ff7ac2e8dcd43f8 not found: ID does not exist" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.193175 4805 scope.go:117] 
"RemoveContainer" containerID="3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.203632 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g44kw"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.208001 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.212201 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ngs29"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.221320 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ngs29"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.235559 4805 scope.go:117] "RemoveContainer" containerID="3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.236128 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12\": container with ID starting with 3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12 not found: ID does not exist" containerID="3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.236241 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12"} err="failed to get container status \"3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12\": rpc error: code = NotFound desc = could not find container \"3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12\": container with ID starting with 
3b80e0f1a6f709e6eb6e980a04877259a91955a68fef5c136017c810f7a95a12 not found: ID does not exist" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.238450 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.238568 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.738541346 +0000 UTC m=+239.300295685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.238794 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c828360-94fa-4770-95a0-d9e635cabb89-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.238844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1c828360-94fa-4770-95a0-d9e635cabb89-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.238897 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.239462 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.739453409 +0000 UTC m=+239.301207748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.241527 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.262119 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.263308 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.282172 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.340041 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.340254 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-26 17:18:44.840221328 +0000 UTC m=+239.401975667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.340443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c828360-94fa-4770-95a0-d9e635cabb89-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.340565 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.340656 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c828360-94fa-4770-95a0-d9e635cabb89-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.341516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/1c828360-94fa-4770-95a0-d9e635cabb89-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.342354 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.842343121 +0000 UTC m=+239.404097460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.381818 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rw6n5" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.384736 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c828360-94fa-4770-95a0-d9e635cabb89-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.441447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.442453 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:44.942437494 +0000 UTC m=+239.504191823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.510278 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.542294 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.543728 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.544182 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 17:18:45.044168896 +0000 UTC m=+239.605923235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.546676 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:44 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:44 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:44 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.546855 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.628705 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6ff9c75dd6-568lh"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.629785 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.632916 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.633298 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.633429 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.633572 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.634456 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.634492 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.639310 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.640057 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.641264 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.641389 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.644716 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.644761 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.644937 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.644968 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645087 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645096 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.645157 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.145141681 +0000 UTC m=+239.706896020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645670 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnjfs\" (UniqueName: \"kubernetes.io/projected/721f07b1-285b-438d-b9dd-13fc2112310b-kube-api-access-vnjfs\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645703 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-config\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645723 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-client-ca\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " 
pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/27763fd0-2213-406e-b488-fb2d67d891f8-kube-api-access-bll7m\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645817 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27763fd0-2213-406e-b488-fb2d67d891f8-serving-cert\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-config\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645891 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/721f07b1-285b-438d-b9dd-13fc2112310b-serving-cert\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.645951 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-proxy-ca-bundles\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.646073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-client-ca\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.646594 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.146585936 +0000 UTC m=+239.708340275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.659475 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ff9c75dd6-568lh"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.676135 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.726634 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.726785 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.728535 4805 patch_prober.go:28] interesting pod/console-f9d7485db-2dnn9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.728576 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2dnn9" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.747679 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/27763fd0-2213-406e-b488-fb2d67d891f8-kube-api-access-bll7m\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748121 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27763fd0-2213-406e-b488-fb2d67d891f8-serving-cert\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748186 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-config\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/721f07b1-285b-438d-b9dd-13fc2112310b-serving-cert\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " 
pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-proxy-ca-bundles\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-client-ca\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnjfs\" (UniqueName: \"kubernetes.io/projected/721f07b1-285b-438d-b9dd-13fc2112310b-kube-api-access-vnjfs\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-config\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.748399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-client-ca\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.750446 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.250419141 +0000 UTC m=+239.812173480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.750808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-client-ca\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.752170 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-client-ca\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.753215 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-config\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.754700 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-config\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.755520 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-proxy-ca-bundles\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.759138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/721f07b1-285b-438d-b9dd-13fc2112310b-serving-cert\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.772360 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/27763fd0-2213-406e-b488-fb2d67d891f8-kube-api-access-bll7m\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " 
pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.776092 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnjfs\" (UniqueName: \"kubernetes.io/projected/721f07b1-285b-438d-b9dd-13fc2112310b-kube-api-access-vnjfs\") pod \"route-controller-manager-7d79569bf6-j6hbn\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.777474 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27763fd0-2213-406e-b488-fb2d67d891f8-serving-cert\") pod \"controller-manager-6ff9c75dd6-568lh\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.850848 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.851364 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.351346905 +0000 UTC m=+239.913101244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.950690 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.951713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.952188 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.452155586 +0000 UTC m=+240.013909935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.952342 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:44 crc kubenswrapper[4805]: E0226 17:18:44.952841 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.452830402 +0000 UTC m=+240.014584741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.962140 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:44 crc kubenswrapper[4805]: W0226 17:18:44.966038 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1c828360_94fa_4770_95a0_d9e635cabb89.slice/crio-26731699884c6346273b5f90377c602dc91a4f54466be8421e278725b9404ebb WatchSource:0}: Error finding container 26731699884c6346273b5f90377c602dc91a4f54466be8421e278725b9404ebb: Status 404 returned error can't find the container with id 26731699884c6346273b5f90377c602dc91a4f54466be8421e278725b9404ebb Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.967762 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05e30706-eb6c-42d4-a28d-aa664f89ed80" path="/var/lib/kubelet/pods/05e30706-eb6c-42d4-a28d-aa664f89ed80/volumes" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.969332 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f440d939-b304-4728-afc4-ad814d771fbb" path="/var/lib/kubelet/pods/f440d939-b304-4728-afc4-ad814d771fbb/volumes" Feb 26 17:18:44 crc kubenswrapper[4805]: I0226 17:18:44.969905 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.029454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c828360-94fa-4770-95a0-d9e635cabb89","Type":"ContainerStarted","Data":"26731699884c6346273b5f90377c602dc91a4f54466be8421e278725b9404ebb"} Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.033158 4805 generic.go:334] "Generic (PLEG): container finished" podID="884caea0-c055-415b-92c3-fae420465726" containerID="4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9" exitCode=0 Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.033222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerDied","Data":"4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9"} Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.039936 4805 generic.go:334] "Generic (PLEG): container finished" podID="0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" containerID="fb246173955ffb56c9ea80ae6637ab230691d880eb3d111a5480e5535212888a" exitCode=0 Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.039995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" event={"ID":"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8","Type":"ContainerDied","Data":"fb246173955ffb56c9ea80ae6637ab230691d880eb3d111a5480e5535212888a"} Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.042926 4805 generic.go:334] "Generic (PLEG): container finished" podID="231a1216-2a55-4e7b-b026-104624c69857" containerID="3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa" exitCode=0 Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.043036 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerDied","Data":"3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa"} Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.051976 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hln6b" Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.054622 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.055205 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.55515838 +0000 UTC m=+240.116912729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.055945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.057406 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.557380225 +0000 UTC m=+240.119134564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.162512 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.162873 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.662857991 +0000 UTC m=+240.224612330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.264143 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.264908 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.764891021 +0000 UTC m=+240.326645360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.365770 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.366357 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.866337618 +0000 UTC m=+240.428091957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.468130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.468587 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:45.968566103 +0000 UTC m=+240.530320502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.484652 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn"] Feb 26 17:18:45 crc kubenswrapper[4805]: W0226 17:18:45.500152 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod721f07b1_285b_438d_b9dd_13fc2112310b.slice/crio-db594adead13f6557c97c9950b3b0b14b7cba52a2881a04286b3ef163b52ef83 WatchSource:0}: Error finding container db594adead13f6557c97c9950b3b0b14b7cba52a2881a04286b3ef163b52ef83: Status 404 returned error can't find the container with id db594adead13f6557c97c9950b3b0b14b7cba52a2881a04286b3ef163b52ef83 Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.540172 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6ff9c75dd6-568lh"] Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.546384 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:45 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:45 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:45 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.546443 4805 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.569635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.570171 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.070149663 +0000 UTC m=+240.631904002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: W0226 17:18:45.612404 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27763fd0_2213_406e_b488_fb2d67d891f8.slice/crio-2ec662f55270b3ea71d78b3489a447df9351bce9bd49f60dfdc8843b22449738 WatchSource:0}: Error finding container 2ec662f55270b3ea71d78b3489a447df9351bce9bd49f60dfdc8843b22449738: Status 404 returned error can't find the container with id 2ec662f55270b3ea71d78b3489a447df9351bce9bd49f60dfdc8843b22449738 Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.671770 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.672300 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.172279966 +0000 UTC m=+240.734034375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.773226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.773775 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.27362559 +0000 UTC m=+240.835379929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.875235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.875680 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.375658371 +0000 UTC m=+240.937412770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.976451 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.976634 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.476609505 +0000 UTC m=+241.038363844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.977160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:45 crc kubenswrapper[4805]: E0226 17:18:45.977491 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.477474417 +0000 UTC m=+241.039228756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.995554 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 17:18:45 crc kubenswrapper[4805]: I0226 17:18:45.996462 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.000547 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.004065 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.066500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.069888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" event={"ID":"27763fd0-2213-406e-b488-fb2d67d891f8","Type":"ContainerStarted","Data":"e67c1e671b704c11d87a8b085af718dcb823b7cd73b377f1b9b0b60d9a675d8b"} Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.070952 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" 
event={"ID":"27763fd0-2213-406e-b488-fb2d67d891f8","Type":"ContainerStarted","Data":"2ec662f55270b3ea71d78b3489a447df9351bce9bd49f60dfdc8843b22449738"} Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.072551 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.077786 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.078007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.078080 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.078216 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.578201605 +0000 UTC m=+241.139955944 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.079511 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c828360-94fa-4770-95a0-d9e635cabb89","Type":"ContainerStarted","Data":"007abb9507d43b7e35d927209824df66b9617074efaac480d45092a1d9b580c9"} Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.083175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" event={"ID":"f86d7b9c-1d60-4153-98aa-6a8985a78907","Type":"ContainerStarted","Data":"40aedaf9692c47527774757e7dc354767770374421dead74310b478aac54510e"} Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.083727 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.087630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" event={"ID":"721f07b1-285b-438d-b9dd-13fc2112310b","Type":"ContainerStarted","Data":"97e6970084d6018115885e78b608dbc109221a6d9f9aad3cb7b992231f2a3835"} Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.087689 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" 
event={"ID":"721f07b1-285b-438d-b9dd-13fc2112310b","Type":"ContainerStarted","Data":"db594adead13f6557c97c9950b3b0b14b7cba52a2881a04286b3ef163b52ef83"} Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.088896 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.104337 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" podStartSLOduration=3.104293897 podStartE2EDuration="3.104293897s" podCreationTimestamp="2026-02-26 17:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:46.091747169 +0000 UTC m=+240.653501528" watchObservedRunningTime="2026-02-26 17:18:46.104293897 +0000 UTC m=+240.666048236" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.120198 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.120170118 podStartE2EDuration="2.120170118s" podCreationTimestamp="2026-02-26 17:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:46.118737803 +0000 UTC m=+240.680492142" watchObservedRunningTime="2026-02-26 17:18:46.120170118 +0000 UTC m=+240.681924467" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.180334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.182111 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.180407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.182699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.182880 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.682860271 +0000 UTC m=+241.244614780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.235087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.284039 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.284219 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.784185334 +0000 UTC m=+241.345939683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.284385 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.284697 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.784685016 +0000 UTC m=+241.346439355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.334267 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.339028 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.393109 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.393933 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.893901474 +0000 UTC m=+241.455655823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.394442 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" podStartSLOduration=3.394407897 podStartE2EDuration="3.394407897s" podCreationTimestamp="2026-02-26 17:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:46.172700561 +0000 UTC m=+240.734454900" watchObservedRunningTime="2026-02-26 17:18:46.394407897 +0000 UTC m=+240.956162256" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.495963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.496296 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:46.996279993 +0000 UTC m=+241.558034332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.499340 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.546142 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:46 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:46 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:46 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.546395 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.598139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.598242 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-config-volume\") pod \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.598315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-secret-volume\") pod \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.598387 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd22t\" (UniqueName: \"kubernetes.io/projected/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-kube-api-access-dd22t\") pod \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\" (UID: \"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8\") " Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.599557 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.099531154 +0000 UTC m=+241.661285493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.602710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" (UID: "0f3f2460-fdb8-4b47-89f9-3cbb84e143e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.615714 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-kube-api-access-dd22t" (OuterVolumeSpecName: "kube-api-access-dd22t") pod "0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" (UID: "0f3f2460-fdb8-4b47-89f9-3cbb84e143e8"). InnerVolumeSpecName "kube-api-access-dd22t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.624522 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" (UID: "0f3f2460-fdb8-4b47-89f9-3cbb84e143e8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.702906 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.703214 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd22t\" (UniqueName: \"kubernetes.io/projected/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-kube-api-access-dd22t\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.703216 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.203196665 +0000 UTC m=+241.764951004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.703239 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.703252 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.808717 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.808940 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.308909767 +0000 UTC m=+241.870664106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.809039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.809545 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.309538642 +0000 UTC m=+241.871292981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.909923 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.913754 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.413729176 +0000 UTC m=+241.975483515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:46 crc kubenswrapper[4805]: I0226 17:18:46.914560 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:46 crc kubenswrapper[4805]: E0226 17:18:46.914875 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.414867304 +0000 UTC m=+241.976621643 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.008184 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.018175 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.018632 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.518617247 +0000 UTC m=+242.080371586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.095620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0a98c4-d884-40b6-a094-098c07a7ca2c","Type":"ContainerStarted","Data":"225f269b2b3f748ca805aa8cbaab6e36e30197f2cc53f8e371c63e40aa8b02ba"} Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.107653 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" event={"ID":"0f3f2460-fdb8-4b47-89f9-3cbb84e143e8","Type":"ContainerDied","Data":"526d13e50e59859fbee6af1c19e37c6c653a9b32b52dab48154a6a2978b3abc2"} Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.107752 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526d13e50e59859fbee6af1c19e37c6c653a9b32b52dab48154a6a2978b3abc2" Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.108248 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728" Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.119716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.120437 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.620424972 +0000 UTC m=+242.182179311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.220955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.221385 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.721310955 +0000 UTC m=+242.283065344 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.222027 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.223069 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.723041387 +0000 UTC m=+242.284795846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.291500 4805 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.323252 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.323455 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.823423468 +0000 UTC m=+242.385177807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.323516 4805 ???:1] "http: TLS handshake error from 192.168.126.11:36546: no serving certificate available for the kubelet" Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.323612 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.323966 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.823949651 +0000 UTC m=+242.385703990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.424884 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.425094 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.925072149 +0000 UTC m=+242.486826498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.425262 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.425568 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:47.925559281 +0000 UTC m=+242.487313620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.526743 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.527352 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.027330465 +0000 UTC m=+242.589084814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.546557 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:47 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:47 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:47 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.546611 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.628427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.628759 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-26 17:18:48.128745661 +0000 UTC m=+242.690500000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.693889 4805 ???:1] "http: TLS handshake error from 192.168.126.11:36556: no serving certificate available for the kubelet" Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.731925 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.732141 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.232085104 +0000 UTC m=+242.793839443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.732254 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.733345 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.233335115 +0000 UTC m=+242.795089544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.833155 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.833449 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.333420438 +0000 UTC m=+242.895174777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.833651 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.834062 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.334049223 +0000 UTC m=+242.895803562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:47 crc kubenswrapper[4805]: I0226 17:18:47.936056 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:47 crc kubenswrapper[4805]: E0226 17:18:47.936759 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.43673859 +0000 UTC m=+242.998492929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.039064 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:48 crc kubenswrapper[4805]: E0226 17:18:48.039404 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.539391336 +0000 UTC m=+243.101145675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.124002 4805 generic.go:334] "Generic (PLEG): container finished" podID="1c828360-94fa-4770-95a0-d9e635cabb89" containerID="007abb9507d43b7e35d927209824df66b9617074efaac480d45092a1d9b580c9" exitCode=0 Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.124111 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c828360-94fa-4770-95a0-d9e635cabb89","Type":"ContainerDied","Data":"007abb9507d43b7e35d927209824df66b9617074efaac480d45092a1d9b580c9"} Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.131296 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" event={"ID":"f86d7b9c-1d60-4153-98aa-6a8985a78907","Type":"ContainerStarted","Data":"1a69c7ec796bd0259df8ec9808b38b367327cfa3ad5f68a19f221957f5b80e96"} Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.139762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:48 crc kubenswrapper[4805]: E0226 17:18:48.140144 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.640116504 +0000 UTC m=+243.201870843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.140675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:48 crc kubenswrapper[4805]: E0226 17:18:48.141686 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-26 17:18:48.641670772 +0000 UTC m=+243.203425101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tshb4" (UID: "c631c898-5180-424c-8cae-922d1a709938") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 26 17:18:48 crc kubenswrapper[4805]: E0226 17:18:48.201952 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod1c828360_94fa_4770_95a0_d9e635cabb89.slice/crio-conmon-007abb9507d43b7e35d927209824df66b9617074efaac480d45092a1d9b580c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528c9fdc_fb48_460b_a06b_a07ce3c388c4.slice/crio-51fa25f42d28aa0657ab20f7d9ca19df2aa2ef00c52aea170c921580a525247f.scope\": RecentStats: unable to find data in memory cache]" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.213166 4805 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-26T17:18:47.291540353Z","Handler":null,"Name":""} Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.224214 4805 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.224250 4805 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.247341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.253451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.352084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.446120 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.446182 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.489591 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tshb4\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.536916 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.544708 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.551207 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:48 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:48 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:48 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.551281 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.963243 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 26 17:18:48 crc kubenswrapper[4805]: I0226 17:18:48.971195 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tshb4"] Feb 26 17:18:48 crc kubenswrapper[4805]: W0226 17:18:48.981717 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc631c898_5180_424c_8cae_922d1a709938.slice/crio-ff7cd83cbaf11d1c207c6f8602ebc000bea62031cb792a62dc9432eb4f16d7ce WatchSource:0}: Error finding container ff7cd83cbaf11d1c207c6f8602ebc000bea62031cb792a62dc9432eb4f16d7ce: Status 404 returned error can't find the container with id ff7cd83cbaf11d1c207c6f8602ebc000bea62031cb792a62dc9432eb4f16d7ce Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.143098 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" event={"ID":"c631c898-5180-424c-8cae-922d1a709938","Type":"ContainerStarted","Data":"ff7cd83cbaf11d1c207c6f8602ebc000bea62031cb792a62dc9432eb4f16d7ce"} Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.153195 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0a98c4-d884-40b6-a094-098c07a7ca2c","Type":"ContainerStarted","Data":"daebb6c8c0b00ae1bf3604c07dcbceae4676863ca245f02a5b9370bd19f2a39f"} Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.509960 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.547472 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:49 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:49 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:49 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.550336 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.675269 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c828360-94fa-4770-95a0-d9e635cabb89-kubelet-dir\") pod \"1c828360-94fa-4770-95a0-d9e635cabb89\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.675441 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c828360-94fa-4770-95a0-d9e635cabb89-kube-api-access\") pod \"1c828360-94fa-4770-95a0-d9e635cabb89\" (UID: \"1c828360-94fa-4770-95a0-d9e635cabb89\") " Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.675727 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c828360-94fa-4770-95a0-d9e635cabb89-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c828360-94fa-4770-95a0-d9e635cabb89" (UID: "1c828360-94fa-4770-95a0-d9e635cabb89"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.701578 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c828360-94fa-4770-95a0-d9e635cabb89-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c828360-94fa-4770-95a0-d9e635cabb89" (UID: "1c828360-94fa-4770-95a0-d9e635cabb89"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.776942 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c828360-94fa-4770-95a0-d9e635cabb89-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.776981 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c828360-94fa-4770-95a0-d9e635cabb89-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:49 crc kubenswrapper[4805]: I0226 17:18:49.938691 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n2k99" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.169011 4805 generic.go:334] "Generic (PLEG): container finished" podID="3e0a98c4-d884-40b6-a094-098c07a7ca2c" containerID="daebb6c8c0b00ae1bf3604c07dcbceae4676863ca245f02a5b9370bd19f2a39f" exitCode=0 Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.170264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0a98c4-d884-40b6-a094-098c07a7ca2c","Type":"ContainerDied","Data":"daebb6c8c0b00ae1bf3604c07dcbceae4676863ca245f02a5b9370bd19f2a39f"} Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.176958 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-wj5zq_528c9fdc-fb48-460b-a06b-a07ce3c388c4/cluster-samples-operator/0.log" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.177035 4805 generic.go:334] "Generic (PLEG): container finished" podID="528c9fdc-fb48-460b-a06b-a07ce3c388c4" containerID="51fa25f42d28aa0657ab20f7d9ca19df2aa2ef00c52aea170c921580a525247f" exitCode=2 Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.177094 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" event={"ID":"528c9fdc-fb48-460b-a06b-a07ce3c388c4","Type":"ContainerDied","Data":"51fa25f42d28aa0657ab20f7d9ca19df2aa2ef00c52aea170c921580a525247f"} Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.177855 4805 scope.go:117] "RemoveContainer" containerID="51fa25f42d28aa0657ab20f7d9ca19df2aa2ef00c52aea170c921580a525247f" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.181234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" event={"ID":"c631c898-5180-424c-8cae-922d1a709938","Type":"ContainerStarted","Data":"20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8"} Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.181364 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.188230 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1c828360-94fa-4770-95a0-d9e635cabb89","Type":"ContainerDied","Data":"26731699884c6346273b5f90377c602dc91a4f54466be8421e278725b9404ebb"} Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.188287 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26731699884c6346273b5f90377c602dc91a4f54466be8421e278725b9404ebb" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.188397 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.224917 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" podStartSLOduration=169.224896575 podStartE2EDuration="2m49.224896575s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:50.218302653 +0000 UTC m=+244.780057012" watchObservedRunningTime="2026-02-26 17:18:50.224896575 +0000 UTC m=+244.786650914" Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.545146 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:50 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:50 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:50 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:50 crc kubenswrapper[4805]: I0226 17:18:50.545215 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.205694 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-wj5zq_528c9fdc-fb48-460b-a06b-a07ce3c388c4/cluster-samples-operator/0.log" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.206123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-wj5zq" 
event={"ID":"528c9fdc-fb48-460b-a06b-a07ce3c388c4","Type":"ContainerStarted","Data":"1636cfce0b94407464c063a6d23ac196b24ac659a3ed2b7575fb4ac2290f839d"} Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.220256 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" event={"ID":"f86d7b9c-1d60-4153-98aa-6a8985a78907","Type":"ContainerStarted","Data":"62736719fd23148e9fdd9c33a26bb37b6832cafc6ff8dfecc66720f3530db97a"} Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.545969 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:51 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:51 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:51 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.546043 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.609468 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.711735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kube-api-access\") pod \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.711896 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kubelet-dir\") pod \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\" (UID: \"3e0a98c4-d884-40b6-a094-098c07a7ca2c\") " Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.712289 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e0a98c4-d884-40b6-a094-098c07a7ca2c" (UID: "3e0a98c4-d884-40b6-a094-098c07a7ca2c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.731011 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e0a98c4-d884-40b6-a094-098c07a7ca2c" (UID: "3e0a98c4-d884-40b6-a094-098c07a7ca2c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.813156 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:51 crc kubenswrapper[4805]: I0226 17:18:51.813191 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e0a98c4-d884-40b6-a094-098c07a7ca2c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 17:18:52 crc kubenswrapper[4805]: I0226 17:18:52.232155 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3e0a98c4-d884-40b6-a094-098c07a7ca2c","Type":"ContainerDied","Data":"225f269b2b3f748ca805aa8cbaab6e36e30197f2cc53f8e371c63e40aa8b02ba"} Feb 26 17:18:52 crc kubenswrapper[4805]: I0226 17:18:52.232205 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225f269b2b3f748ca805aa8cbaab6e36e30197f2cc53f8e371c63e40aa8b02ba" Feb 26 17:18:52 crc kubenswrapper[4805]: I0226 17:18:52.232214 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 26 17:18:52 crc kubenswrapper[4805]: I0226 17:18:52.257964 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xbznm" podStartSLOduration=21.257946313 podStartE2EDuration="21.257946313s" podCreationTimestamp="2026-02-26 17:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:18:52.254862737 +0000 UTC m=+246.816617076" watchObservedRunningTime="2026-02-26 17:18:52.257946313 +0000 UTC m=+246.819700652" Feb 26 17:18:52 crc kubenswrapper[4805]: I0226 17:18:52.544579 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:52 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:52 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:52 crc kubenswrapper[4805]: healthz check failed Feb 26 17:18:52 crc kubenswrapper[4805]: I0226 17:18:52.544628 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:53 crc kubenswrapper[4805]: I0226 17:18:53.545611 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:53 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:53 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:53 crc kubenswrapper[4805]: healthz check 
failed Feb 26 17:18:53 crc kubenswrapper[4805]: I0226 17:18:53.545899 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:53 crc kubenswrapper[4805]: I0226 17:18:53.745066 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:53 crc kubenswrapper[4805]: I0226 17:18:53.747508 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 17:18:53 crc kubenswrapper[4805]: I0226 17:18:53.765072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6e20a5b-84fd-4e2d-836c-a3891ef809dc-metrics-certs\") pod \"network-metrics-daemon-hbv6d\" (UID: \"d6e20a5b-84fd-4e2d-836c-a3891ef809dc\") " pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.066686 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.075328 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hbv6d" Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.092239 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.092293 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.092358 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.092611 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.545122 4805 patch_prober.go:28] interesting pod/router-default-5444994796-bnlgh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 26 17:18:54 crc kubenswrapper[4805]: [-]has-synced failed: reason withheld Feb 26 17:18:54 crc kubenswrapper[4805]: [+]process-running ok Feb 26 17:18:54 crc kubenswrapper[4805]: healthz check failed Feb 26 
17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.545399 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnlgh" podUID="0bc402c9-dc6d-402f-9122-c7054331f144" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.727338 4805 patch_prober.go:28] interesting pod/console-f9d7485db-2dnn9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 26 17:18:54 crc kubenswrapper[4805]: I0226 17:18:54.727393 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-2dnn9" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 26 17:18:55 crc kubenswrapper[4805]: I0226 17:18:55.545973 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:55 crc kubenswrapper[4805]: I0226 17:18:55.551465 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-bnlgh" Feb 26 17:18:57 crc kubenswrapper[4805]: I0226 17:18:57.592584 4805 ???:1] "http: TLS handshake error from 192.168.126.11:56156: no serving certificate available for the kubelet" Feb 26 17:19:02 crc kubenswrapper[4805]: I0226 17:19:02.015844 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff9c75dd6-568lh"] Feb 26 17:19:02 crc kubenswrapper[4805]: I0226 17:19:02.016438 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" 
podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" containerID="cri-o://e67c1e671b704c11d87a8b085af718dcb823b7cd73b377f1b9b0b60d9a675d8b" gracePeriod=30 Feb 26 17:19:02 crc kubenswrapper[4805]: I0226 17:19:02.167072 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn"] Feb 26 17:19:02 crc kubenswrapper[4805]: I0226 17:19:02.173788 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" containerID="cri-o://97e6970084d6018115885e78b608dbc109221a6d9f9aad3cb7b992231f2a3835" gracePeriod=30 Feb 26 17:19:02 crc kubenswrapper[4805]: I0226 17:19:02.978216 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:19:02 crc kubenswrapper[4805]: I0226 17:19:02.978282 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.090714 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.090788 4805 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.090849 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.091595 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"81d984c232621a71b9c796a35776e6544c64f33ea9c87e73d976b36b5e998db6"} pod="openshift-console/downloads-7954f5f757-lrc7d" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.091666 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" containerID="cri-o://81d984c232621a71b9c796a35776e6544c64f33ea9c87e73d976b36b5e998db6" gracePeriod=2 Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.092263 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.092307 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.092618 4805 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.092649 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.962919 4805 patch_prober.go:28] interesting pod/controller-manager-6ff9c75dd6-568lh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.963346 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.970944 4805 patch_prober.go:28] interesting pod/route-controller-manager-7d79569bf6-j6hbn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 26 17:19:04 crc kubenswrapper[4805]: I0226 17:19:04.971108 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 26 17:19:05 crc kubenswrapper[4805]: I0226 17:19:05.467508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:19:05 crc kubenswrapper[4805]: I0226 17:19:05.472447 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:19:07 crc kubenswrapper[4805]: I0226 17:19:07.334853 4805 generic.go:334] "Generic (PLEG): container finished" podID="27763fd0-2213-406e-b488-fb2d67d891f8" containerID="e67c1e671b704c11d87a8b085af718dcb823b7cd73b377f1b9b0b60d9a675d8b" exitCode=0 Feb 26 17:19:07 crc kubenswrapper[4805]: I0226 17:19:07.335107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" event={"ID":"27763fd0-2213-406e-b488-fb2d67d891f8","Type":"ContainerDied","Data":"e67c1e671b704c11d87a8b085af718dcb823b7cd73b377f1b9b0b60d9a675d8b"} Feb 26 17:19:08 crc kubenswrapper[4805]: I0226 17:19:08.553166 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:19:14 crc kubenswrapper[4805]: I0226 17:19:14.091962 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:14 crc kubenswrapper[4805]: I0226 17:19:14.092059 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 
10.217.0.15:8080: connect: connection refused" Feb 26 17:19:14 crc kubenswrapper[4805]: I0226 17:19:14.963492 4805 patch_prober.go:28] interesting pod/controller-manager-6ff9c75dd6-568lh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 26 17:19:14 crc kubenswrapper[4805]: I0226 17:19:14.963556 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 26 17:19:14 crc kubenswrapper[4805]: I0226 17:19:14.970861 4805 patch_prober.go:28] interesting pod/route-controller-manager-7d79569bf6-j6hbn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 26 17:19:14 crc kubenswrapper[4805]: I0226 17:19:14.970924 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 26 17:19:15 crc kubenswrapper[4805]: I0226 17:19:15.164257 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pzjmg" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.786767 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 
26 17:19:16 crc kubenswrapper[4805]: E0226 17:19:16.787202 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c828360-94fa-4770-95a0-d9e635cabb89" containerName="pruner" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.787224 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c828360-94fa-4770-95a0-d9e635cabb89" containerName="pruner" Feb 26 17:19:16 crc kubenswrapper[4805]: E0226 17:19:16.787255 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" containerName="collect-profiles" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.787270 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" containerName="collect-profiles" Feb 26 17:19:16 crc kubenswrapper[4805]: E0226 17:19:16.787298 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0a98c4-d884-40b6-a094-098c07a7ca2c" containerName="pruner" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.787313 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0a98c4-d884-40b6-a094-098c07a7ca2c" containerName="pruner" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.787519 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0a98c4-d884-40b6-a094-098c07a7ca2c" containerName="pruner" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.787553 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" containerName="collect-profiles" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.787583 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c828360-94fa-4770-95a0-d9e635cabb89" containerName="pruner" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.788307 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.790593 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.790813 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.802977 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.907743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:16 crc kubenswrapper[4805]: I0226 17:19:16.907858 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:17 crc kubenswrapper[4805]: I0226 17:19:17.010801 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:17 crc kubenswrapper[4805]: I0226 17:19:17.010950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:17 crc kubenswrapper[4805]: I0226 17:19:17.011346 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:17 crc kubenswrapper[4805]: I0226 17:19:17.063163 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:17 crc kubenswrapper[4805]: I0226 17:19:17.115874 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:19:20 crc kubenswrapper[4805]: I0226 17:19:20.781129 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.782214 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.803824 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.868514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-var-lock\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.868548 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kube-api-access\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.868859 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.971145 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.971334 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.973482 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-var-lock\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.973554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kube-api-access\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.973663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-var-lock\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:20.999730 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kube-api-access\") pod \"installer-9-crc\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:21 crc kubenswrapper[4805]: I0226 17:19:21.116580 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:19:24 crc kubenswrapper[4805]: I0226 17:19:24.090813 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:24 crc kubenswrapper[4805]: I0226 17:19:24.091130 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:24 crc kubenswrapper[4805]: I0226 17:19:24.962820 4805 patch_prober.go:28] interesting pod/controller-manager-6ff9c75dd6-568lh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 26 17:19:24 crc kubenswrapper[4805]: I0226 17:19:24.962911 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 26 17:19:24 crc kubenswrapper[4805]: I0226 17:19:24.970928 4805 patch_prober.go:28] interesting pod/route-controller-manager-7d79569bf6-j6hbn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 26 17:19:24 crc kubenswrapper[4805]: I0226 17:19:24.971008 4805 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 26 17:19:32 crc kubenswrapper[4805]: I0226 17:19:32.978290 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:19:32 crc kubenswrapper[4805]: I0226 17:19:32.978913 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:19:33 crc kubenswrapper[4805]: I0226 17:19:33.487774 4805 generic.go:334] "Generic (PLEG): container finished" podID="721f07b1-285b-438d-b9dd-13fc2112310b" containerID="97e6970084d6018115885e78b608dbc109221a6d9f9aad3cb7b992231f2a3835" exitCode=0 Feb 26 17:19:33 crc kubenswrapper[4805]: I0226 17:19:33.487840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" event={"ID":"721f07b1-285b-438d-b9dd-13fc2112310b","Type":"ContainerDied","Data":"97e6970084d6018115885e78b608dbc109221a6d9f9aad3cb7b992231f2a3835"} Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.093257 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection 
refused" start-of-body= Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.093339 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.499722 4805 generic.go:334] "Generic (PLEG): container finished" podID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerID="81d984c232621a71b9c796a35776e6544c64f33ea9c87e73d976b36b5e998db6" exitCode=0 Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.499789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lrc7d" event={"ID":"046e32fa-0c59-4d66-9ab3-02d3ab26255b","Type":"ContainerDied","Data":"81d984c232621a71b9c796a35776e6544c64f33ea9c87e73d976b36b5e998db6"} Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.963140 4805 patch_prober.go:28] interesting pod/controller-manager-6ff9c75dd6-568lh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.963200 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.973298 4805 patch_prober.go:28] interesting pod/route-controller-manager-7d79569bf6-j6hbn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Feb 26 17:19:34 crc kubenswrapper[4805]: I0226 17:19:34.973375 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Feb 26 17:19:37 crc kubenswrapper[4805]: E0226 17:19:37.980137 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\": context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 17:19:37 crc kubenswrapper[4805]: E0226 17:19:37.980805 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2qjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2xsss_openshift-marketplace(7a876494-42c5-4be9-aad1-46d8ce3c68bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\": context canceled" logger="UnhandledError" Feb 26 17:19:37 crc kubenswrapper[4805]: E0226 17:19:37.982866 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051: Get \\\"https://registry.redhat.io/v2/redhat/redhat-marketplace-index/blobs/sha256:2086b7801d96d309e48e1c678789d95541de89bbae905e6f5a8de845927ca051\\\": context canceled\"" pod="openshift-marketplace/redhat-marketplace-2xsss" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" Feb 26 17:19:38 crc kubenswrapper[4805]: I0226 17:19:38.576637 4805 ???:1] "http: TLS handshake error from 192.168.126.11:32992: no serving certificate available for the kubelet" Feb 26 17:19:41 crc kubenswrapper[4805]: E0226 17:19:41.368583 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2xsss" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.426760 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.433076 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.457094 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz"] Feb 26 17:19:41 crc kubenswrapper[4805]: E0226 17:19:41.457362 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.457378 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" Feb 26 17:19:41 crc kubenswrapper[4805]: E0226 17:19:41.457396 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.457402 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.457511 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" containerName="controller-manager" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.457527 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" containerName="route-controller-manager" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.457944 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460489 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/27763fd0-2213-406e-b488-fb2d67d891f8-kube-api-access-bll7m\") pod \"27763fd0-2213-406e-b488-fb2d67d891f8\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460574 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27763fd0-2213-406e-b488-fb2d67d891f8-serving-cert\") pod \"27763fd0-2213-406e-b488-fb2d67d891f8\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-client-ca\") pod \"721f07b1-285b-438d-b9dd-13fc2112310b\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460636 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-config\") pod \"721f07b1-285b-438d-b9dd-13fc2112310b\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/721f07b1-285b-438d-b9dd-13fc2112310b-serving-cert\") pod \"721f07b1-285b-438d-b9dd-13fc2112310b\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460722 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-client-ca\") pod \"27763fd0-2213-406e-b488-fb2d67d891f8\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460757 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-proxy-ca-bundles\") pod \"27763fd0-2213-406e-b488-fb2d67d891f8\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460787 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-config\") pod \"27763fd0-2213-406e-b488-fb2d67d891f8\" (UID: \"27763fd0-2213-406e-b488-fb2d67d891f8\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460818 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnjfs\" (UniqueName: \"kubernetes.io/projected/721f07b1-285b-438d-b9dd-13fc2112310b-kube-api-access-vnjfs\") pod \"721f07b1-285b-438d-b9dd-13fc2112310b\" (UID: \"721f07b1-285b-438d-b9dd-13fc2112310b\") " Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.460967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c783f4e5-c0b1-4832-9988-3ad7d2a38129-serving-cert\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.461087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-config\") pod 
\"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.461162 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsth\" (UniqueName: \"kubernetes.io/projected/c783f4e5-c0b1-4832-9988-3ad7d2a38129-kube-api-access-qhsth\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.461196 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-client-ca\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.462382 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-client-ca" (OuterVolumeSpecName: "client-ca") pod "721f07b1-285b-438d-b9dd-13fc2112310b" (UID: "721f07b1-285b-438d-b9dd-13fc2112310b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.462724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-config" (OuterVolumeSpecName: "config") pod "27763fd0-2213-406e-b488-fb2d67d891f8" (UID: "27763fd0-2213-406e-b488-fb2d67d891f8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.463171 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-config" (OuterVolumeSpecName: "config") pod "721f07b1-285b-438d-b9dd-13fc2112310b" (UID: "721f07b1-285b-438d-b9dd-13fc2112310b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.463832 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-client-ca" (OuterVolumeSpecName: "client-ca") pod "27763fd0-2213-406e-b488-fb2d67d891f8" (UID: "27763fd0-2213-406e-b488-fb2d67d891f8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.464072 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "27763fd0-2213-406e-b488-fb2d67d891f8" (UID: "27763fd0-2213-406e-b488-fb2d67d891f8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.482399 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721f07b1-285b-438d-b9dd-13fc2112310b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "721f07b1-285b-438d-b9dd-13fc2112310b" (UID: "721f07b1-285b-438d-b9dd-13fc2112310b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.482425 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27763fd0-2213-406e-b488-fb2d67d891f8-kube-api-access-bll7m" (OuterVolumeSpecName: "kube-api-access-bll7m") pod "27763fd0-2213-406e-b488-fb2d67d891f8" (UID: "27763fd0-2213-406e-b488-fb2d67d891f8"). InnerVolumeSpecName "kube-api-access-bll7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.482596 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27763fd0-2213-406e-b488-fb2d67d891f8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "27763fd0-2213-406e-b488-fb2d67d891f8" (UID: "27763fd0-2213-406e-b488-fb2d67d891f8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.482938 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721f07b1-285b-438d-b9dd-13fc2112310b-kube-api-access-vnjfs" (OuterVolumeSpecName: "kube-api-access-vnjfs") pod "721f07b1-285b-438d-b9dd-13fc2112310b" (UID: "721f07b1-285b-438d-b9dd-13fc2112310b"). InnerVolumeSpecName "kube-api-access-vnjfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.507332 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz"] Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.549008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" event={"ID":"721f07b1-285b-438d-b9dd-13fc2112310b","Type":"ContainerDied","Data":"db594adead13f6557c97c9950b3b0b14b7cba52a2881a04286b3ef163b52ef83"} Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.549110 4805 scope.go:117] "RemoveContainer" containerID="97e6970084d6018115885e78b608dbc109221a6d9f9aad3cb7b992231f2a3835" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.549240 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.556641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" event={"ID":"27763fd0-2213-406e-b488-fb2d67d891f8","Type":"ContainerDied","Data":"2ec662f55270b3ea71d78b3489a447df9351bce9bd49f60dfdc8843b22449738"} Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.556944 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6ff9c75dd6-568lh" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhsth\" (UniqueName: \"kubernetes.io/projected/c783f4e5-c0b1-4832-9988-3ad7d2a38129-kube-api-access-qhsth\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562066 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-client-ca\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562124 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c783f4e5-c0b1-4832-9988-3ad7d2a38129-serving-cert\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562172 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-config\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562239 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bll7m\" (UniqueName: 
\"kubernetes.io/projected/27763fd0-2213-406e-b488-fb2d67d891f8-kube-api-access-bll7m\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562256 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27763fd0-2213-406e-b488-fb2d67d891f8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562267 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562278 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/721f07b1-285b-438d-b9dd-13fc2112310b-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562291 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/721f07b1-285b-438d-b9dd-13fc2112310b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562301 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.562310 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.563428 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-config\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: 
\"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.564944 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27763fd0-2213-406e-b488-fb2d67d891f8-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.564965 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnjfs\" (UniqueName: \"kubernetes.io/projected/721f07b1-285b-438d-b9dd-13fc2112310b-kube-api-access-vnjfs\") on node \"crc\" DevicePath \"\"" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.566454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-client-ca\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.570943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c783f4e5-c0b1-4832-9988-3ad7d2a38129-serving-cert\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.578708 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn"] Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.581633 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d79569bf6-j6hbn"] Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.583200 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhsth\" (UniqueName: \"kubernetes.io/projected/c783f4e5-c0b1-4832-9988-3ad7d2a38129-kube-api-access-qhsth\") pod \"route-controller-manager-79c9b785f5-sp7zz\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.605504 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6ff9c75dd6-568lh"] Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.610281 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6ff9c75dd6-568lh"] Feb 26 17:19:41 crc kubenswrapper[4805]: I0226 17:19:41.804810 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:42 crc kubenswrapper[4805]: I0226 17:19:42.962425 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27763fd0-2213-406e-b488-fb2d67d891f8" path="/var/lib/kubelet/pods/27763fd0-2213-406e-b488-fb2d67d891f8/volumes" Feb 26 17:19:42 crc kubenswrapper[4805]: I0226 17:19:42.963956 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="721f07b1-285b-438d-b9dd-13fc2112310b" path="/var/lib/kubelet/pods/721f07b1-285b-438d-b9dd-13fc2112310b/volumes" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.665412 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-565fd78f84-xlklc"] Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.666104 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.668307 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.670781 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.671000 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.671112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.671300 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.671448 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.680220 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.681137 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-565fd78f84-xlklc"] Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.700804 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-config\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " 
pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.700860 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-proxy-ca-bundles\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.700984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73cd7b83-240c-458e-88a3-7ceace9a121d-serving-cert\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.701115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-client-ca\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.701178 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcg8w\" (UniqueName: \"kubernetes.io/projected/73cd7b83-240c-458e-88a3-7ceace9a121d-kube-api-access-lcg8w\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: E0226 17:19:43.778750 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 17:19:43 crc kubenswrapper[4805]: E0226 17:19:43.778924 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:19:43 crc kubenswrapper[4805]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 17:19:43 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2djv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535438-d8kz2_openshift-infra(ce8a1740-3334-42a4-af1d-1a7de4758c9c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 17:19:43 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:19:43 crc kubenswrapper[4805]: E0226 17:19:43.780346 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" podUID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" Feb 26 17:19:43 
crc kubenswrapper[4805]: I0226 17:19:43.801780 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73cd7b83-240c-458e-88a3-7ceace9a121d-serving-cert\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.801849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-client-ca\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.801902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcg8w\" (UniqueName: \"kubernetes.io/projected/73cd7b83-240c-458e-88a3-7ceace9a121d-kube-api-access-lcg8w\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.801970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-config\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.801995 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-proxy-ca-bundles\") pod \"controller-manager-565fd78f84-xlklc\" (UID: 
\"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.803269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-proxy-ca-bundles\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.803345 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-client-ca\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.804256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-config\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.820164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcg8w\" (UniqueName: \"kubernetes.io/projected/73cd7b83-240c-458e-88a3-7ceace9a121d-kube-api-access-lcg8w\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.822802 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/73cd7b83-240c-458e-88a3-7ceace9a121d-serving-cert\") pod \"controller-manager-565fd78f84-xlklc\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:43 crc kubenswrapper[4805]: I0226 17:19:43.988727 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:44 crc kubenswrapper[4805]: I0226 17:19:44.090275 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:44 crc kubenswrapper[4805]: I0226 17:19:44.090332 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:44 crc kubenswrapper[4805]: E0226 17:19:44.573465 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" podUID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" Feb 26 17:19:46 crc kubenswrapper[4805]: I0226 17:19:46.411928 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hbv6d"] Feb 26 17:19:48 crc kubenswrapper[4805]: E0226 17:19:48.354163 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 
17:19:48 crc kubenswrapper[4805]: E0226 17:19:48.354727 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p6lzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gzkpq_openshift-marketplace(87196950-f6be-442b-a725-3cdee5962f55): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:48 crc kubenswrapper[4805]: E0226 17:19:48.355908 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gzkpq" podUID="87196950-f6be-442b-a725-3cdee5962f55" Feb 26 17:19:48 crc kubenswrapper[4805]: E0226 17:19:48.377295 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 26 17:19:48 crc kubenswrapper[4805]: E0226 17:19:48.377470 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnv8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,Ru
nAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rmjsl_openshift-marketplace(1f5ab03e-b223-4e5b-8c9f-3d350a66156e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:48 crc kubenswrapper[4805]: E0226 17:19:48.378639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rmjsl" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" Feb 26 17:19:49 crc kubenswrapper[4805]: E0226 17:19:49.630606 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 17:19:49 crc kubenswrapper[4805]: E0226 17:19:49.631115 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnrtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7fvvs_openshift-marketplace(83d8504f-33ad-4812-bc1e-11233c225974): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:49 crc kubenswrapper[4805]: E0226 17:19:49.632952 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7fvvs" podUID="83d8504f-33ad-4812-bc1e-11233c225974" Feb 26 17:19:53 crc 
kubenswrapper[4805]: E0226 17:19:53.147647 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 17:19:53 crc kubenswrapper[4805]: E0226 17:19:53.148514 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjwsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-n5l62_openshift-marketplace(884caea0-c055-415b-92c3-fae420465726): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:53 crc kubenswrapper[4805]: E0226 17:19:53.149792 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n5l62" podUID="884caea0-c055-415b-92c3-fae420465726" Feb 26 17:19:53 crc kubenswrapper[4805]: E0226 17:19:53.173739 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 26 17:19:53 crc kubenswrapper[4805]: E0226 17:19:53.174343 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfrzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tfcdt_openshift-marketplace(043bfd8c-1387-4b00-ad52-1e4efd43c942): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:53 crc kubenswrapper[4805]: E0226 17:19:53.175522 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tfcdt" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" Feb 26 17:19:53 crc 
kubenswrapper[4805]: I0226 17:19:53.271618 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 26 17:19:54 crc kubenswrapper[4805]: I0226 17:19:54.090599 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:54 crc kubenswrapper[4805]: I0226 17:19:54.090676 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:54 crc kubenswrapper[4805]: I0226 17:19:54.697322 4805 scope.go:117] "RemoveContainer" containerID="e67c1e671b704c11d87a8b085af718dcb823b7cd73b377f1b9b0b60d9a675d8b" Feb 26 17:19:54 crc kubenswrapper[4805]: E0226 17:19:54.708761 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rmjsl" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" Feb 26 17:19:54 crc kubenswrapper[4805]: E0226 17:19:54.709614 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gzkpq" podUID="87196950-f6be-442b-a725-3cdee5962f55" Feb 26 17:19:54 crc kubenswrapper[4805]: E0226 17:19:54.711689 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n5l62" podUID="884caea0-c055-415b-92c3-fae420465726" Feb 26 17:19:54 crc kubenswrapper[4805]: E0226 17:19:54.711812 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7fvvs" podUID="83d8504f-33ad-4812-bc1e-11233c225974" Feb 26 17:19:54 crc kubenswrapper[4805]: E0226 17:19:54.711854 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tfcdt" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" Feb 26 17:19:54 crc kubenswrapper[4805]: W0226 17:19:54.712165 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6e20a5b_84fd_4e2d_836c_a3891ef809dc.slice/crio-22232c6b7b8106642005f1d94b1199bd84f36fe031b0cc0badadeb1802ac11dd WatchSource:0}: Error finding container 22232c6b7b8106642005f1d94b1199bd84f36fe031b0cc0badadeb1802ac11dd: Status 404 returned error can't find the container with id 22232c6b7b8106642005f1d94b1199bd84f36fe031b0cc0badadeb1802ac11dd Feb 26 17:19:54 crc kubenswrapper[4805]: W0226 17:19:54.717142 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2a3a66dc_90f1_4e09_bfa4_1b316b977db2.slice/crio-e84d1bd97b788b51a8fba02dfb329c59bb6fa64b88f247d7dfca074f0f4bacfb WatchSource:0}: Error finding container e84d1bd97b788b51a8fba02dfb329c59bb6fa64b88f247d7dfca074f0f4bacfb: Status 404 returned error can't find the container with id 
e84d1bd97b788b51a8fba02dfb329c59bb6fa64b88f247d7dfca074f0f4bacfb Feb 26 17:19:54 crc kubenswrapper[4805]: I0226 17:19:54.997933 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mcvr5"] Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.072682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 26 17:19:55 crc kubenswrapper[4805]: E0226 17:19:55.120444 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 26 17:19:55 crc kubenswrapper[4805]: E0226 17:19:55.120647 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-58lf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jttjv_openshift-marketplace(6239f68a-d80a-4fd4-9a6d-69bd48b1c015): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:55 crc kubenswrapper[4805]: E0226 17:19:55.122526 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jttjv" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" Feb 26 17:19:55 crc 
kubenswrapper[4805]: I0226 17:19:55.158912 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz"] Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.223246 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-565fd78f84-xlklc"] Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.645323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" event={"ID":"73cd7b83-240c-458e-88a3-7ceace9a121d","Type":"ContainerStarted","Data":"6a5c195b073e2f11d66b90bbb1d4ff60c91fbc789d72756af24c7939cbf58508"} Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.647789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a3a66dc-90f1-4e09-bfa4-1b316b977db2","Type":"ContainerStarted","Data":"e84d1bd97b788b51a8fba02dfb329c59bb6fa64b88f247d7dfca074f0f4bacfb"} Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.649416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7","Type":"ContainerStarted","Data":"448f342d6db5712c0e053a979245c286ff20a9280b182d4b81a3a9ebd15cfa6c"} Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.650428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" event={"ID":"c783f4e5-c0b1-4832-9988-3ad7d2a38129","Type":"ContainerStarted","Data":"3cf4b783445885a4e812bf8bae45537b019b4f3c1f47314612a48db5c40e7123"} Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.653319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-lrc7d" 
event={"ID":"046e32fa-0c59-4d66-9ab3-02d3ab26255b","Type":"ContainerStarted","Data":"c9f0db402c44f42c6cf59b937e0276a47e889126e1ed75965230fac69c350cd5"} Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.653685 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.653793 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.653832 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:55 crc kubenswrapper[4805]: I0226 17:19:55.656270 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" event={"ID":"d6e20a5b-84fd-4e2d-836c-a3891ef809dc","Type":"ContainerStarted","Data":"22232c6b7b8106642005f1d94b1199bd84f36fe031b0cc0badadeb1802ac11dd"} Feb 26 17:19:55 crc kubenswrapper[4805]: E0226 17:19:55.657049 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jttjv" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" Feb 26 17:19:55 crc kubenswrapper[4805]: E0226 17:19:55.789582 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 17:19:55 crc kubenswrapper[4805]: E0226 17:19:55.789768 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfr46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wnt7l_openshift-marketplace(231a1216-2a55-4e7b-b026-104624c69857): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:19:55 crc 
kubenswrapper[4805]: E0226 17:19:55.791731 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wnt7l" podUID="231a1216-2a55-4e7b-b026-104624c69857" Feb 26 17:19:56 crc kubenswrapper[4805]: I0226 17:19:56.661818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hbv6d" event={"ID":"d6e20a5b-84fd-4e2d-836c-a3891ef809dc","Type":"ContainerStarted","Data":"a3b1cb6a84a2a82a50b4142c1f96c65e945f926baafa1855a08b77a5d227d0e6"} Feb 26 17:19:56 crc kubenswrapper[4805]: E0226 17:19:56.663638 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wnt7l" podUID="231a1216-2a55-4e7b-b026-104624c69857" Feb 26 17:19:56 crc kubenswrapper[4805]: I0226 17:19:56.665596 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:56 crc kubenswrapper[4805]: I0226 17:19:56.665653 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:57 crc kubenswrapper[4805]: I0226 17:19:57.670921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" 
event={"ID":"73cd7b83-240c-458e-88a3-7ceace9a121d","Type":"ContainerStarted","Data":"192f1eaed52fea05cc56ffcabcf69e4823210f1b9f698ca814b63068b1873038"} Feb 26 17:19:57 crc kubenswrapper[4805]: I0226 17:19:57.673358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7","Type":"ContainerStarted","Data":"37bdb205d9580cc28782b5404e09b57e27e1c772ea248cd8785769d89310edac"} Feb 26 17:19:57 crc kubenswrapper[4805]: I0226 17:19:57.676154 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a3a66dc-90f1-4e09-bfa4-1b316b977db2","Type":"ContainerStarted","Data":"e1e76e81ba12069c5276fa9dc86d9a70def8b3a8880ea3c2c450f409e2c06425"} Feb 26 17:19:57 crc kubenswrapper[4805]: I0226 17:19:57.678325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" event={"ID":"c783f4e5-c0b1-4832-9988-3ad7d2a38129","Type":"ContainerStarted","Data":"813d2b184ad1fb2e266fdef3f334757f4fa1229199fac9327abfba599be8b658"} Feb 26 17:19:57 crc kubenswrapper[4805]: I0226 17:19:57.679070 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:19:57 crc kubenswrapper[4805]: I0226 17:19:57.679170 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.686601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-hbv6d" event={"ID":"d6e20a5b-84fd-4e2d-836c-a3891ef809dc","Type":"ContainerStarted","Data":"30680ff765e513a5f48cb4027d6f45b3b01665cab40a9b63b04d5cca6b997af8"} Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.689514 4805 generic.go:334] "Generic (PLEG): container finished" podID="2a3a66dc-90f1-4e09-bfa4-1b316b977db2" containerID="e1e76e81ba12069c5276fa9dc86d9a70def8b3a8880ea3c2c450f409e2c06425" exitCode=0 Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.689667 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a3a66dc-90f1-4e09-bfa4-1b316b977db2","Type":"ContainerDied","Data":"e1e76e81ba12069c5276fa9dc86d9a70def8b3a8880ea3c2c450f409e2c06425"} Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.690209 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.690951 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.696365 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.696421 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.706213 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hbv6d" podStartSLOduration=237.706192927 podStartE2EDuration="3m57.706192927s" podCreationTimestamp="2026-02-26 17:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:19:58.705382997 +0000 UTC m=+313.267137336" watchObservedRunningTime="2026-02-26 17:19:58.706192927 +0000 UTC m=+313.267947266" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.738010 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=38.737992815 podStartE2EDuration="38.737992815s" podCreationTimestamp="2026-02-26 17:19:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:19:58.737031691 +0000 UTC m=+313.298786030" watchObservedRunningTime="2026-02-26 17:19:58.737992815 +0000 UTC m=+313.299747154" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.761371 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" podStartSLOduration=36.761353176 podStartE2EDuration="36.761353176s" podCreationTimestamp="2026-02-26 17:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:19:58.757330818 +0000 UTC m=+313.319085177" watchObservedRunningTime="2026-02-26 17:19:58.761353176 +0000 UTC m=+313.323107515" Feb 26 17:19:58 crc kubenswrapper[4805]: I0226 17:19:58.788470 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" podStartSLOduration=36.788446639 podStartE2EDuration="36.788446639s" podCreationTimestamp="2026-02-26 17:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:19:58.788355266 +0000 UTC m=+313.350109605" watchObservedRunningTime="2026-02-26 17:19:58.788446639 +0000 UTC m=+313.350200978" Feb 26 
17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.133883 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535440-nztxm"] Feb 26 17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.135300 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.137527 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535440-nztxm"] Feb 26 17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.138226 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.264792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcmms\" (UniqueName: \"kubernetes.io/projected/3e191647-de53-41b3-b7b9-5cb11ccb9f87-kube-api-access-bcmms\") pod \"auto-csr-approver-29535440-nztxm\" (UID: \"3e191647-de53-41b3-b7b9-5cb11ccb9f87\") " pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.365696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcmms\" (UniqueName: \"kubernetes.io/projected/3e191647-de53-41b3-b7b9-5cb11ccb9f87-kube-api-access-bcmms\") pod \"auto-csr-approver-29535440-nztxm\" (UID: \"3e191647-de53-41b3-b7b9-5cb11ccb9f87\") " pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:00 crc kubenswrapper[4805]: I0226 17:20:00.389864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcmms\" (UniqueName: \"kubernetes.io/projected/3e191647-de53-41b3-b7b9-5cb11ccb9f87-kube-api-access-bcmms\") pod \"auto-csr-approver-29535440-nztxm\" (UID: \"3e191647-de53-41b3-b7b9-5cb11ccb9f87\") " pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:00 crc 
kubenswrapper[4805]: I0226 17:20:00.459900 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.160439 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.293647 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kubelet-dir\") pod \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.293710 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kube-api-access\") pod \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\" (UID: \"2a3a66dc-90f1-4e09-bfa4-1b316b977db2\") " Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.293891 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a3a66dc-90f1-4e09-bfa4-1b316b977db2" (UID: "2a3a66dc-90f1-4e09-bfa4-1b316b977db2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.303660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a3a66dc-90f1-4e09-bfa4-1b316b977db2" (UID: "2a3a66dc-90f1-4e09-bfa4-1b316b977db2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.394956 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.394998 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a3a66dc-90f1-4e09-bfa4-1b316b977db2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.714241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a3a66dc-90f1-4e09-bfa4-1b316b977db2","Type":"ContainerDied","Data":"e84d1bd97b788b51a8fba02dfb329c59bb6fa64b88f247d7dfca074f0f4bacfb"} Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.714300 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e84d1bd97b788b51a8fba02dfb329c59bb6fa64b88f247d7dfca074f0f4bacfb" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.714307 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.978432 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.978529 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.978610 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.979520 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:20:02 crc kubenswrapper[4805]: I0226 17:20:02.979620 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb" gracePeriod=600 Feb 26 17:20:03 crc kubenswrapper[4805]: I0226 17:20:03.722952 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb" exitCode=0 Feb 26 17:20:03 crc kubenswrapper[4805]: I0226 17:20:03.723067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb"} Feb 26 17:20:04 crc kubenswrapper[4805]: I0226 17:20:04.090510 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:20:04 crc kubenswrapper[4805]: I0226 17:20:04.090985 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:20:04 crc kubenswrapper[4805]: I0226 17:20:04.090670 4805 patch_prober.go:28] interesting pod/downloads-7954f5f757-lrc7d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 26 17:20:04 crc kubenswrapper[4805]: I0226 17:20:04.091308 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-lrc7d" podUID="046e32fa-0c59-4d66-9ab3-02d3ab26255b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 26 17:20:07 crc kubenswrapper[4805]: I0226 17:20:07.849962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29535440-nztxm"] Feb 26 17:20:07 crc kubenswrapper[4805]: W0226 17:20:07.858204 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e191647_de53_41b3_b7b9_5cb11ccb9f87.slice/crio-1be33f37a4643913c6ba30218bd1dcd46b345ff548d21940b11af6faa19930ad WatchSource:0}: Error finding container 1be33f37a4643913c6ba30218bd1dcd46b345ff548d21940b11af6faa19930ad: Status 404 returned error can't find the container with id 1be33f37a4643913c6ba30218bd1dcd46b345ff548d21940b11af6faa19930ad Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.751335 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535440-nztxm" event={"ID":"3e191647-de53-41b3-b7b9-5cb11ccb9f87","Type":"ContainerStarted","Data":"1be33f37a4643913c6ba30218bd1dcd46b345ff548d21940b11af6faa19930ad"} Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.754592 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerID="7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b" exitCode=0 Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.754669 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xsss" event={"ID":"7a876494-42c5-4be9-aad1-46d8ce3c68bb","Type":"ContainerDied","Data":"7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b"} Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.756268 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" event={"ID":"ce8a1740-3334-42a4-af1d-1a7de4758c9c","Type":"ContainerStarted","Data":"6a678337576ab19bb9ec69028e8af9e0e6aac50abc487291596e0d465fff2805"} Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.759925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"042dcdf4837fac5099eaa927fbb96bd6244e875ff2c0526c5bf517e80e23ce1d"} Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.919001 4805 csr.go:261] certificate signing request csr-2jvf8 is approved, waiting to be issued Feb 26 17:20:08 crc kubenswrapper[4805]: I0226 17:20:08.930623 4805 csr.go:257] certificate signing request csr-2jvf8 is issued Feb 26 17:20:09 crc kubenswrapper[4805]: I0226 17:20:09.784655 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" podStartSLOduration=39.253669399 podStartE2EDuration="2m9.784639986s" podCreationTimestamp="2026-02-26 17:18:00 +0000 UTC" firstStartedPulling="2026-02-26 17:18:36.887542826 +0000 UTC m=+231.449297165" lastFinishedPulling="2026-02-26 17:20:07.418513413 +0000 UTC m=+321.980267752" observedRunningTime="2026-02-26 17:20:09.782937314 +0000 UTC m=+324.344691653" watchObservedRunningTime="2026-02-26 17:20:09.784639986 +0000 UTC m=+324.346394325" Feb 26 17:20:09 crc kubenswrapper[4805]: I0226 17:20:09.932524 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-12 12:11:09.27107037 +0000 UTC Feb 26 17:20:09 crc kubenswrapper[4805]: I0226 17:20:09.932603 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7674h50m59.338477203s for next certificate rotation Feb 26 17:20:10 crc kubenswrapper[4805]: I0226 17:20:10.773192 4805 generic.go:334] "Generic (PLEG): container finished" podID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" containerID="6a678337576ab19bb9ec69028e8af9e0e6aac50abc487291596e0d465fff2805" exitCode=0 Feb 26 17:20:10 crc kubenswrapper[4805]: I0226 17:20:10.773320 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" 
event={"ID":"ce8a1740-3334-42a4-af1d-1a7de4758c9c","Type":"ContainerDied","Data":"6a678337576ab19bb9ec69028e8af9e0e6aac50abc487291596e0d465fff2805"} Feb 26 17:20:10 crc kubenswrapper[4805]: I0226 17:20:10.936212 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-29 07:43:12.82494101 +0000 UTC Feb 26 17:20:10 crc kubenswrapper[4805]: I0226 17:20:10.936249 4805 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6614h23m1.88869371s for next certificate rotation Feb 26 17:20:14 crc kubenswrapper[4805]: I0226 17:20:14.110593 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-lrc7d" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.074111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.074835 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.074955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.075084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.077440 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.077459 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.077978 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.088423 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.092535 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.100366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.102959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.111975 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" containerID="cri-o://f352fc047e35d3cbddb4bbdebabaf0b17e8ec8f40f668a5f1752a35e0449b8ef" gracePeriod=15 Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.128754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.144640 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.271491 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.278960 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.279281 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2djv8\" (UniqueName: \"kubernetes.io/projected/ce8a1740-3334-42a4-af1d-1a7de4758c9c-kube-api-access-2djv8\") pod \"ce8a1740-3334-42a4-af1d-1a7de4758c9c\" (UID: \"ce8a1740-3334-42a4-af1d-1a7de4758c9c\") " Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.283753 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce8a1740-3334-42a4-af1d-1a7de4758c9c-kube-api-access-2djv8" (OuterVolumeSpecName: "kube-api-access-2djv8") pod "ce8a1740-3334-42a4-af1d-1a7de4758c9c" (UID: "ce8a1740-3334-42a4-af1d-1a7de4758c9c"). InnerVolumeSpecName "kube-api-access-2djv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.285648 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.380942 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2djv8\" (UniqueName: \"kubernetes.io/projected/ce8a1740-3334-42a4-af1d-1a7de4758c9c-kube-api-access-2djv8\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.836803 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" event={"ID":"ce8a1740-3334-42a4-af1d-1a7de4758c9c","Type":"ContainerDied","Data":"39cb649c80d4dcec4829bcfb53f972b0c82e025d1454908e435876e33183651d"} Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.836867 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535438-d8kz2" Feb 26 17:20:20 crc kubenswrapper[4805]: I0226 17:20:20.836875 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39cb649c80d4dcec4829bcfb53f972b0c82e025d1454908e435876e33183651d" Feb 26 17:20:21 crc kubenswrapper[4805]: I0226 17:20:21.845098 4805 generic.go:334] "Generic (PLEG): container finished" podID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerID="f352fc047e35d3cbddb4bbdebabaf0b17e8ec8f40f668a5f1752a35e0449b8ef" exitCode=0 Feb 26 17:20:21 crc kubenswrapper[4805]: I0226 17:20:21.845161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" event={"ID":"2f7f215f-544a-4b8a-814d-5e6ecd814b2d","Type":"ContainerDied","Data":"f352fc047e35d3cbddb4bbdebabaf0b17e8ec8f40f668a5f1752a35e0449b8ef"} Feb 26 17:20:22 crc kubenswrapper[4805]: I0226 17:20:22.021781 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-565fd78f84-xlklc"] Feb 26 17:20:22 crc kubenswrapper[4805]: I0226 17:20:22.022076 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerName="controller-manager" containerID="cri-o://192f1eaed52fea05cc56ffcabcf69e4823210f1b9f698ca814b63068b1873038" gracePeriod=30 Feb 26 17:20:22 crc kubenswrapper[4805]: I0226 17:20:22.120116 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz"] Feb 26 17:20:22 crc kubenswrapper[4805]: I0226 17:20:22.120335 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" containerName="route-controller-manager" 
containerID="cri-o://813d2b184ad1fb2e266fdef3f334757f4fa1229199fac9327abfba599be8b658" gracePeriod=30 Feb 26 17:20:23 crc kubenswrapper[4805]: I0226 17:20:23.593856 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mcvr5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body= Feb 26 17:20:23 crc kubenswrapper[4805]: I0226 17:20:23.594355 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" Feb 26 17:20:23 crc kubenswrapper[4805]: I0226 17:20:23.857758 4805 generic.go:334] "Generic (PLEG): container finished" podID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerID="192f1eaed52fea05cc56ffcabcf69e4823210f1b9f698ca814b63068b1873038" exitCode=0 Feb 26 17:20:23 crc kubenswrapper[4805]: I0226 17:20:23.857827 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" event={"ID":"73cd7b83-240c-458e-88a3-7ceace9a121d","Type":"ContainerDied","Data":"192f1eaed52fea05cc56ffcabcf69e4823210f1b9f698ca814b63068b1873038"} Feb 26 17:20:23 crc kubenswrapper[4805]: I0226 17:20:23.989681 4805 patch_prober.go:28] interesting pod/controller-manager-565fd78f84-xlklc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 26 17:20:23 crc kubenswrapper[4805]: I0226 17:20:23.989968 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" 
podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 26 17:20:24 crc kubenswrapper[4805]: I0226 17:20:24.866253 4805 generic.go:334] "Generic (PLEG): container finished" podID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" containerID="813d2b184ad1fb2e266fdef3f334757f4fa1229199fac9327abfba599be8b658" exitCode=0 Feb 26 17:20:24 crc kubenswrapper[4805]: I0226 17:20:24.866320 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" event={"ID":"c783f4e5-c0b1-4832-9988-3ad7d2a38129","Type":"ContainerDied","Data":"813d2b184ad1fb2e266fdef3f334757f4fa1229199fac9327abfba599be8b658"} Feb 26 17:20:31 crc kubenswrapper[4805]: I0226 17:20:31.806083 4805 patch_prober.go:28] interesting pod/route-controller-manager-79c9b785f5-sp7zz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 26 17:20:31 crc kubenswrapper[4805]: I0226 17:20:31.806476 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 26 17:20:33 crc kubenswrapper[4805]: I0226 17:20:33.594120 4805 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mcvr5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body= Feb 26 17:20:33 crc 
kubenswrapper[4805]: I0226 17:20:33.594205 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" Feb 26 17:20:33 crc kubenswrapper[4805]: I0226 17:20:33.990330 4805 patch_prober.go:28] interesting pod/controller-manager-565fd78f84-xlklc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 26 17:20:33 crc kubenswrapper[4805]: I0226 17:20:33.990431 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.168420 4805 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.265323 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.265695 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3a66dc-90f1-4e09-bfa4-1b316b977db2" containerName="pruner" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.265730 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a3a66dc-90f1-4e09-bfa4-1b316b977db2" containerName="pruner" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.265752 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" containerName="oc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.265760 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" containerName="oc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.265883 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3a66dc-90f1-4e09-bfa4-1b316b977db2" containerName="pruner" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.265901 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" containerName="oc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.266348 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.266499 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.266900 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389" gracePeriod=15 Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.266975 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd" gracePeriod=15 Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267010 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504" gracePeriod=15 Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267037 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c" gracePeriod=15 Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.266946 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb" gracePeriod=15 Feb 26 17:20:35 crc 
kubenswrapper[4805]: I0226 17:20:35.267293 4805 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267447 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267458 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267483 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267491 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267504 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267511 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267520 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267527 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267538 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 
crc kubenswrapper[4805]: I0226 17:20:35.267546 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267554 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267561 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267571 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267577 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.267589 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267596 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267720 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267733 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267745 4805 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267753 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267761 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267772 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267784 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.267796 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.268004 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.268034 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.268046 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.268053 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 
crc kubenswrapper[4805]: I0226 17:20:35.268153 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 26 17:20:35 crc kubenswrapper[4805]: E0226 17:20:35.324926 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375554 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375584 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375640 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375656 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375671 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.375688 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477522 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477654 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477655 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477734 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477793 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.477823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.626249 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.927159 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.928570 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.929821 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb" exitCode=0 Feb 26 17:20:35 crc kubenswrapper[4805]: I0226 17:20:35.929855 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504" exitCode=2 Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.449407 4805 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.449764 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: 
I0226 17:20:36.901534 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.912964 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.913448 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.935924 4805 generic.go:334] "Generic (PLEG): container finished" podID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" containerID="37bdb205d9580cc28782b5404e09b57e27e1c772ea248cd8785769d89310edac" exitCode=0 Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.936027 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7","Type":"ContainerDied","Data":"37bdb205d9580cc28782b5404e09b57e27e1c772ea248cd8785769d89310edac"} Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.937449 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.938184 4805 
status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.938526 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.938569 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.939942 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.940585 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c" exitCode=0 Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.940607 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd" exitCode=0 Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.940666 4805 scope.go:117] "RemoveContainer" containerID="8e5577bab1f2eb7af85c70fd7c50dcecce2a9547193adf1f4b4b47b9526cbde4" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.942285 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" event={"ID":"2f7f215f-544a-4b8a-814d-5e6ecd814b2d","Type":"ContainerDied","Data":"d8b59acb36316aa0fcd78b27458d7994098f3d5e0d302c24eac3ede7a52f170e"} Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.942376 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.942902 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.943235 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.943473 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.955895 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection 
refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.956196 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.956421 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:36 crc kubenswrapper[4805]: E0226 17:20:36.960716 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-tfcdt.1897db9af4064ee2 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-tfcdt,UID:043bfd8c-1387-4b00-ad52-1e4efd43c942,APIVersion:v1,ResourceVersion:28134,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 29.751s (29.751s including waiting). 
Image size: 1215588055 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:20:36.959080162 +0000 UTC m=+351.520834501,LastTimestamp:2026-02-26 17:20:36.959080162 +0000 UTC m=+351.520834501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995388 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-policies\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-cliconfig\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995464 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-router-certs\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995501 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-dir\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995528 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-ocp-branding-template\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995547 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-login\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995660 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-provider-selection\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.995704 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-trusted-ca-bundle\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996406 4805 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996687 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-idp-0-file-data\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996738 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-error\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-serving-cert\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996819 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-service-ca\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.996879 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vns2j\" (UniqueName: \"kubernetes.io/projected/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-kube-api-access-vns2j\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc 
kubenswrapper[4805]: I0226 17:20:36.996902 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-session\") pod \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\" (UID: \"2f7f215f-544a-4b8a-814d-5e6ecd814b2d\") " Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.997195 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.997218 4805 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.997230 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:36 crc kubenswrapper[4805]: I0226 17:20:36.997241 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.009131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:36.997342 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.009380 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-kube-api-access-vns2j" (OuterVolumeSpecName: "kube-api-access-vns2j") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "kube-api-access-vns2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.009470 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.009724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.009885 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.014012 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.014311 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.014569 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.014707 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2f7f215f-544a-4b8a-814d-5e6ecd814b2d" (UID: "2f7f215f-544a-4b8a-814d-5e6ecd814b2d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098488 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098536 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098550 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098562 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098575 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098585 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098597 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098608 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098619 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vns2j\" (UniqueName: \"kubernetes.io/projected/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-kube-api-access-vns2j\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.098630 4805 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f7f215f-544a-4b8a-814d-5e6ecd814b2d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.175325 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.176423 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.176956 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.177314 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.189787 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.190401 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.190710 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.190869 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.191103 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.291069 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.293248 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.293629 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.293934 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-client-ca\") pod \"73cd7b83-240c-458e-88a3-7ceace9a121d\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300577 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c783f4e5-c0b1-4832-9988-3ad7d2a38129-serving-cert\") pod \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300621 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcg8w\" (UniqueName: \"kubernetes.io/projected/73cd7b83-240c-458e-88a3-7ceace9a121d-kube-api-access-lcg8w\") pod \"73cd7b83-240c-458e-88a3-7ceace9a121d\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300649 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-config\") pod \"73cd7b83-240c-458e-88a3-7ceace9a121d\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300681 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-proxy-ca-bundles\") pod \"73cd7b83-240c-458e-88a3-7ceace9a121d\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300716 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-client-ca\") pod \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300763 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-config\") pod \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300786 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73cd7b83-240c-458e-88a3-7ceace9a121d-serving-cert\") pod \"73cd7b83-240c-458e-88a3-7ceace9a121d\" (UID: \"73cd7b83-240c-458e-88a3-7ceace9a121d\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.300808 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhsth\" (UniqueName: \"kubernetes.io/projected/c783f4e5-c0b1-4832-9988-3ad7d2a38129-kube-api-access-qhsth\") pod \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\" (UID: \"c783f4e5-c0b1-4832-9988-3ad7d2a38129\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.302072 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-client-ca" (OuterVolumeSpecName: "client-ca") pod "c783f4e5-c0b1-4832-9988-3ad7d2a38129" (UID: "c783f4e5-c0b1-4832-9988-3ad7d2a38129"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.302101 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-config" (OuterVolumeSpecName: "config") pod "c783f4e5-c0b1-4832-9988-3ad7d2a38129" (UID: "c783f4e5-c0b1-4832-9988-3ad7d2a38129"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.302807 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-config" (OuterVolumeSpecName: "config") pod "73cd7b83-240c-458e-88a3-7ceace9a121d" (UID: "73cd7b83-240c-458e-88a3-7ceace9a121d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.303138 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "73cd7b83-240c-458e-88a3-7ceace9a121d" (UID: "73cd7b83-240c-458e-88a3-7ceace9a121d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.303607 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-client-ca" (OuterVolumeSpecName: "client-ca") pod "73cd7b83-240c-458e-88a3-7ceace9a121d" (UID: "73cd7b83-240c-458e-88a3-7ceace9a121d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.312944 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c783f4e5-c0b1-4832-9988-3ad7d2a38129-kube-api-access-qhsth" (OuterVolumeSpecName: "kube-api-access-qhsth") pod "c783f4e5-c0b1-4832-9988-3ad7d2a38129" (UID: "c783f4e5-c0b1-4832-9988-3ad7d2a38129"). InnerVolumeSpecName "kube-api-access-qhsth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.315599 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c783f4e5-c0b1-4832-9988-3ad7d2a38129-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c783f4e5-c0b1-4832-9988-3ad7d2a38129" (UID: "c783f4e5-c0b1-4832-9988-3ad7d2a38129"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.361731 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cd7b83-240c-458e-88a3-7ceace9a121d-kube-api-access-lcg8w" (OuterVolumeSpecName: "kube-api-access-lcg8w") pod "73cd7b83-240c-458e-88a3-7ceace9a121d" (UID: "73cd7b83-240c-458e-88a3-7ceace9a121d"). InnerVolumeSpecName "kube-api-access-lcg8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.366427 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73cd7b83-240c-458e-88a3-7ceace9a121d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "73cd7b83-240c-458e-88a3-7ceace9a121d" (UID: "73cd7b83-240c-458e-88a3-7ceace9a121d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402590 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402637 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhsth\" (UniqueName: \"kubernetes.io/projected/c783f4e5-c0b1-4832-9988-3ad7d2a38129-kube-api-access-qhsth\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402653 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73cd7b83-240c-458e-88a3-7ceace9a121d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402665 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-client-ca\") on node \"crc\" DevicePath \"\"" 
Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402676 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c783f4e5-c0b1-4832-9988-3ad7d2a38129-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402686 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcg8w\" (UniqueName: \"kubernetes.io/projected/73cd7b83-240c-458e-88a3-7ceace9a121d-kube-api-access-lcg8w\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402695 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402705 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/73cd7b83-240c-458e-88a3-7ceace9a121d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.402716 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c783f4e5-c0b1-4832-9988-3ad7d2a38129-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: E0226 17:20:37.717246 4805 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 26 17:20:37 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed" Netns:"/var/run/netns/386168c4-c6dd-4f40-ae92-7adfb68cdb19" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:37 crc kubenswrapper[4805]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:37 crc kubenswrapper[4805]: > Feb 26 17:20:37 crc kubenswrapper[4805]: E0226 17:20:37.717751 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 26 17:20:37 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed" Netns:"/var/run/netns/386168c4-c6dd-4f40-ae92-7adfb68cdb19" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:37 crc kubenswrapper[4805]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} 
Feb 26 17:20:37 crc kubenswrapper[4805]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:37 crc kubenswrapper[4805]: E0226 17:20:37.717826 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 26 17:20:37 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed" Netns:"/var/run/netns/386168c4-c6dd-4f40-ae92-7adfb68cdb19" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:37 crc kubenswrapper[4805]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:37 crc kubenswrapper[4805]: > pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:37 crc kubenswrapper[4805]: E0226 17:20:37.717903 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"networking-console-plugin-85b44fc459-gdk6g_openshift-network-console(5fe485a1-e14f-4c09-b5b9-f252bc42b7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-85b44fc459-gdk6g_openshift-network-console_5fe485a1-e14f-4c09-b5b9-f252bc42b7e8_0(9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed): error adding pod openshift-network-console_networking-console-plugin-85b44fc459-gdk6g to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed\\\" Netns:\\\"/var/run/netns/386168c4-c6dd-4f40-ae92-7adfb68cdb19\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-85b44fc459-gdk6g;K8S_POD_INFRA_CONTAINER_ID=9fa6d30509359c994f2703dce7dd6f6c7aa621aa43be3fc6f18ed19c349c0eed;K8S_POD_UID=5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] 
networking: Multus: [openshift-network-console/networking-console-plugin-85b44fc459-gdk6g/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: SetNetworkStatus: failed to update the pod networking-console-plugin-85b44fc459-gdk6g in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g?timeout=1m0s\\\": dial tcp 38.102.83.194:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.730242 4805 scope.go:117] "RemoveContainer" containerID="f352fc047e35d3cbddb4bbdebabaf0b17e8ec8f40f668a5f1752a35e0449b8ef" Feb 26 17:20:37 crc kubenswrapper[4805]: W0226 17:20:37.735539 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-15b88a040a280e1275e9ab8166f0b514a0d794c57ee0be802ddcaef621eb7e26 WatchSource:0}: Error finding container 15b88a040a280e1275e9ab8166f0b514a0d794c57ee0be802ddcaef621eb7e26: Status 404 returned error can't find the container with id 15b88a040a280e1275e9ab8166f0b514a0d794c57ee0be802ddcaef621eb7e26 Feb 26 17:20:37 crc 
kubenswrapper[4805]: I0226 17:20:37.758911 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.760195 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.761281 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.761558 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.761912 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.762386 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.762607 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807193 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807283 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807344 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807412 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807605 4805 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807626 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.807673 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.910337 4805 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.960676 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.960722 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" event={"ID":"c783f4e5-c0b1-4832-9988-3ad7d2a38129","Type":"ContainerDied","Data":"3cf4b783445885a4e812bf8bae45537b019b4f3c1f47314612a48db5c40e7123"} Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.960984 4805 scope.go:117] "RemoveContainer" containerID="813d2b184ad1fb2e266fdef3f334757f4fa1229199fac9327abfba599be8b658" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.961870 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.962507 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.962740 4805 status_manager.go:851] 
"Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.962907 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.963139 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.984171 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.985259 4805 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389" exitCode=0 Feb 26 17:20:37 crc kubenswrapper[4805]: I0226 17:20:37.985643 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.000963 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"15b88a040a280e1275e9ab8166f0b514a0d794c57ee0be802ddcaef621eb7e26"} Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.018946 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.019213 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.021125 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.022191 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" event={"ID":"73cd7b83-240c-458e-88a3-7ceace9a121d","Type":"ContainerDied","Data":"6a5c195b073e2f11d66b90bbb1d4ff60c91fbc789d72756af24c7939cbf58508"} Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.025051 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.026650 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.027159 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.027613 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": 
dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.027888 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.028126 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.028368 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.028624 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.035836 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.036801 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.037266 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.037442 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.037593 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.056299 4805 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 26 17:20:38 crc 
kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853" Netns:"/var/run/netns/e46641e0-a5e8-4a27-960c-34297ece93b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:38 crc kubenswrapper[4805]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:38 crc kubenswrapper[4805]: > Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.056362 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 26 17:20:38 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853" Netns:"/var/run/netns/e46641e0-a5e8-4a27-960c-34297ece93b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update 
failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:38 crc kubenswrapper[4805]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:38 crc kubenswrapper[4805]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.056381 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 26 17:20:38 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853" Netns:"/var/run/netns/e46641e0-a5e8-4a27-960c-34297ece93b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: 
[openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:38 crc kubenswrapper[4805]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:38 crc kubenswrapper[4805]: > pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.056432 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-source-55646444c4-trplf_openshift-network-diagnostics_9d751cbb-f2e2-430d-9754-c882a5e924a5_0(c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853): error adding pod openshift-network-diagnostics_network-check-source-55646444c4-trplf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853\\\" Netns:\\\"/var/run/netns/e46641e0-a5e8-4a27-960c-34297ece93b6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-source-55646444c4-trplf;K8S_POD_INFRA_CONTAINER_ID=c38d0d42b7a0586eb7c0e0c159e583fcb121b94b445e939f36c95a73f43e4853;K8S_POD_UID=9d751cbb-f2e2-430d-9754-c882a5e924a5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-source-55646444c4-trplf] networking: Multus: [openshift-network-diagnostics/network-check-source-55646444c4-trplf/9d751cbb-f2e2-430d-9754-c882a5e924a5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-source-55646444c4-trplf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-55646444c4-trplf?timeout=1m0s\\\": dial tcp 38.102.83.194:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.082642 4805 scope.go:117] "RemoveContainer" 
containerID="fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.097859 4805 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 26 17:20:38 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7" Netns:"/var/run/netns/b52cd2c1-ebc5-48c2-ab91-c349e272266f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:38 crc kubenswrapper[4805]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:38 crc kubenswrapper[4805]: > Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.097913 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 26 17:20:38 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7" Netns:"/var/run/netns/b52cd2c1-ebc5-48c2-ab91-c349e272266f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:38 crc kubenswrapper[4805]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:38 crc kubenswrapper[4805]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.097932 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 26 17:20:38 crc kubenswrapper[4805]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7" Netns:"/var/run/netns/b52cd2c1-ebc5-48c2-ab91-c349e272266f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s": dial tcp 38.102.83.194:6443: connect: connection refused Feb 26 17:20:38 crc kubenswrapper[4805]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 26 17:20:38 crc kubenswrapper[4805]: > pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.097979 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-xd92c_openshift-network-diagnostics(3b6479f0-333b-4a96-9adf-2099afdc2447)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-xd92c_openshift-network-diagnostics_3b6479f0-333b-4a96-9adf-2099afdc2447_0(f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7): error adding pod openshift-network-diagnostics_network-check-target-xd92c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7\\\" 
Netns:\\\"/var/run/netns/b52cd2c1-ebc5-48c2-ab91-c349e272266f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-xd92c;K8S_POD_INFRA_CONTAINER_ID=f94e268f82cc81d26c95b157224dadc0741ebb3fd0eaeaca14c9e546c0bad8f7;K8S_POD_UID=3b6479f0-333b-4a96-9adf-2099afdc2447\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-xd92c] networking: Multus: [openshift-network-diagnostics/network-check-target-xd92c/3b6479f0-333b-4a96-9adf-2099afdc2447]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-check-target-xd92c in out of cluster comm: SetNetworkStatus: failed to update the pod network-check-target-xd92c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c?timeout=1m0s\\\": dial tcp 38.102.83.194:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.117963 4805 scope.go:117] "RemoveContainer" containerID="78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.164930 4805 scope.go:117] "RemoveContainer" containerID="86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd" Feb 26 
17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.222491 4805 scope.go:117] "RemoveContainer" containerID="685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.271856 4805 scope.go:117] "RemoveContainer" containerID="8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.318334 4805 scope.go:117] "RemoveContainer" containerID="373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.358438 4805 scope.go:117] "RemoveContainer" containerID="fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.360044 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\": container with ID starting with fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c not found: ID does not exist" containerID="fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.360087 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c"} err="failed to get container status \"fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\": rpc error: code = NotFound desc = could not find container \"fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c\": container with ID starting with fbe2859c9b2321853b34650acd2cebf1e671bc155ec7402cd098808299018b0c not found: ID does not exist" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.360115 4805 scope.go:117] "RemoveContainer" containerID="78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 
17:20:38.363203 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\": container with ID starting with 78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb not found: ID does not exist" containerID="78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.363244 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb"} err="failed to get container status \"78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\": rpc error: code = NotFound desc = could not find container \"78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb\": container with ID starting with 78290e0a84b193c0d6699eaa4dc8f3a9fbba937a930514cccf9415735f3409bb not found: ID does not exist" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.363271 4805 scope.go:117] "RemoveContainer" containerID="86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.363595 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\": container with ID starting with 86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd not found: ID does not exist" containerID="86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.363626 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd"} err="failed to get container status \"86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\": rpc 
error: code = NotFound desc = could not find container \"86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd\": container with ID starting with 86a29faf84cc870194f5405c7218a1d3f41df80a1d81dc9fe383de0f2e41f3fd not found: ID does not exist" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.363645 4805 scope.go:117] "RemoveContainer" containerID="685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.363893 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\": container with ID starting with 685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504 not found: ID does not exist" containerID="685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.363924 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504"} err="failed to get container status \"685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\": rpc error: code = NotFound desc = could not find container \"685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504\": container with ID starting with 685dcab5ae8859ee21e0033ab80087dc1bb847ef500ebc2dad280e6744696504 not found: ID does not exist" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.363943 4805 scope.go:117] "RemoveContainer" containerID="8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.364229 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\": container with ID starting with 
8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389 not found: ID does not exist" containerID="8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.364259 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389"} err="failed to get container status \"8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\": rpc error: code = NotFound desc = could not find container \"8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389\": container with ID starting with 8d5ed7272cac0e8906b594c8f82f4d2c952dfba948b271c84894e0275e9f7389 not found: ID does not exist" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.364281 4805 scope.go:117] "RemoveContainer" containerID="373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315" Feb 26 17:20:38 crc kubenswrapper[4805]: E0226 17:20:38.364779 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\": container with ID starting with 373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315 not found: ID does not exist" containerID="373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.364808 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315"} err="failed to get container status \"373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\": rpc error: code = NotFound desc = could not find container \"373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315\": container with ID starting with 373462e1d77cc4af67b69c785a5d099abeac62c2376d21c8c566ec56aa684315 not found: ID does not 
exist" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.364825 4805 scope.go:117] "RemoveContainer" containerID="192f1eaed52fea05cc56ffcabcf69e4823210f1b9f698ca814b63068b1873038" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.388709 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.389184 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.389366 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.389528 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.389685 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.390554 4805 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.524623 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kube-api-access\") pod \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.524755 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kubelet-dir\") pod \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.524812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-var-lock\") pod \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\" (UID: \"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7\") " Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.525119 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-var-lock" (OuterVolumeSpecName: "var-lock") pod "fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" (UID: "fabf0d12-f3a0-4562-a606-4b0cedcc6cd7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.526398 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" (UID: "fabf0d12-f3a0-4562-a606-4b0cedcc6cd7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.538318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" (UID: "fabf0d12-f3a0-4562-a606-4b0cedcc6cd7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.626311 4805 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.626364 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.626384 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fabf0d12-f3a0-4562-a606-4b0cedcc6cd7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:38 crc kubenswrapper[4805]: I0226 17:20:38.960818 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 26 17:20:39 crc 
kubenswrapper[4805]: I0226 17:20:39.027975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fabf0d12-f3a0-4562-a606-4b0cedcc6cd7","Type":"ContainerDied","Data":"448f342d6db5712c0e053a979245c286ff20a9280b182d4b81a3a9ebd15cfa6c"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.028007 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.028030 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="448f342d6db5712c0e053a979245c286ff20a9280b182d4b81a3a9ebd15cfa6c" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.030206 4805 generic.go:334] "Generic (PLEG): container finished" podID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerID="d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584" exitCode=0 Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.030262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jttjv" event={"ID":"6239f68a-d80a-4fd4-9a6d-69bd48b1c015","Type":"ContainerDied","Data":"d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.032372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerStarted","Data":"403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.033128 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection 
refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.033323 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.033574 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.033917 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.034126 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.035661 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xsss" 
event={"ID":"7a876494-42c5-4be9-aad1-46d8ce3c68bb","Type":"ContainerStarted","Data":"34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.036270 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.036595 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.036878 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.037198 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.037484 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.037706 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerStarted","Data":"0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.037781 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.038445 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.038767 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.038989 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" 
pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.039341 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.039517 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.039656 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.039854 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.040291 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="83d8504f-33ad-4812-bc1e-11233c225974" containerID="a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c" exitCode=0 Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.040313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fvvs" event={"ID":"83d8504f-33ad-4812-bc1e-11233c225974","Type":"ContainerDied","Data":"a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.040760 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.041032 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.041299 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.041604 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.041782 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerStarted","Data":"06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.041878 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.042189 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.042461 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.042848 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.043435 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.043686 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.043928 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.044341 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.044588 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.044947 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.045407 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.045730 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.045998 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.046956 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerID="15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b" exitCode=0 Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.047057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfcdt" event={"ID":"043bfd8c-1387-4b00-ad52-1e4efd43c942","Type":"ContainerDied","Data":"15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.047949 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.048223 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.048407 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.048626 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.048875 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.049135 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.049359 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerStarted","Data":"aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8"} Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.049436 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.049655 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.049850 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.050288 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.082651 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.083096 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.084045 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" 
pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.084487 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.084865 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.085156 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.085540 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.085808 4805 status_manager.go:851] "Failed to get status 
for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.086084 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: I0226 17:20:39.086325 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:39 crc kubenswrapper[4805]: E0226 17:20:39.617331 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod884caea0_c055_415b_92c3_fae420465726.slice/crio-conmon-0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb.scope\": RecentStats: unable to find data in memory cache]" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.057147 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerID="aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8" exitCode=0 Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.057227 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" 
event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerDied","Data":"aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8"} Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.058597 4805 generic.go:334] "Generic (PLEG): container finished" podID="884caea0-c055-415b-92c3-fae420465726" containerID="0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb" exitCode=0 Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.058655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerDied","Data":"0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb"} Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.059681 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.059971 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.060209 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc 
kubenswrapper[4805]: I0226 17:20:40.060493 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.060718 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.060901 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.061113 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.061377 4805 generic.go:334] "Generic (PLEG): container finished" podID="87196950-f6be-442b-a725-3cdee5962f55" containerID="403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad" exitCode=0 Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.061460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerDied","Data":"403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad"} Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.061590 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.062311 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.062550 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.062897 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.063814 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" 
pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.065114 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.065335 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.065518 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.065776 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.065911 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.066123 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.066320 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.066501 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:40 crc kubenswrapper[4805]: I0226 17:20:40.066705 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 
17:20:41.067161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535440-nztxm" event={"ID":"3e191647-de53-41b3-b7b9-5cb11ccb9f87","Type":"ContainerStarted","Data":"0a52605714e6f534fc7ed16c2450ffcae0ceeeda0f1395acca87897d11095cdd"} Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.068662 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d0682160bc5a5d23ae35e6999fb4630e74af294a6e56bcc044918da56611b8be"} Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.069342 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: E0226 17:20:41.069454 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.069697 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.070045 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.070406 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.070692 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.070904 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.070934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerDied","Data":"06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7"} Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.070915 4805 generic.go:334] "Generic (PLEG): container finished" podID="231a1216-2a55-4e7b-b026-104624c69857" containerID="06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7" exitCode=0 Feb 26 17:20:41 crc 
kubenswrapper[4805]: I0226 17:20:41.071167 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.071489 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.071808 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.072273 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.072653 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection 
refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.073247 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.073533 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.073844 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.074200 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.074460 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: 
connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.074767 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.074946 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.075109 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.075248 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.075396 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: 
connection refused" Feb 26 17:20:41 crc kubenswrapper[4805]: I0226 17:20:41.075539 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: E0226 17:20:42.077973 4805 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.078567 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.079315 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.079758 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": 
dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.080007 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.080430 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.081080 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.081365 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.081745 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.082464 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.082747 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.083160 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.083663 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:42 crc kubenswrapper[4805]: I0226 17:20:42.131334 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:20:42 crc kubenswrapper[4805]: 
I0226 17:20:42.131424 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.083452 4805 generic.go:334] "Generic (PLEG): container finished" podID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" containerID="0a52605714e6f534fc7ed16c2450ffcae0ceeeda0f1395acca87897d11095cdd" exitCode=0 Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.083515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535440-nztxm" event={"ID":"3e191647-de53-41b3-b7b9-5cb11ccb9f87","Type":"ContainerDied","Data":"0a52605714e6f534fc7ed16c2450ffcae0ceeeda0f1395acca87897d11095cdd"} Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.084072 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.084406 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.084785 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.085052 4805 status_manager.go:851] 
"Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.085295 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.085519 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.085756 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.086002 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 
crc kubenswrapper[4805]: I0226 17:20:43.086251 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.086508 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.086743 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.086987 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.087294 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 
17:20:43 crc kubenswrapper[4805]: I0226 17:20:43.862386 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2xsss" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="registry-server" probeResult="failure" output=< Feb 26 17:20:43 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:20:43 crc kubenswrapper[4805]: > Feb 26 17:20:43 crc kubenswrapper[4805]: E0226 17:20:43.976462 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:20:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:20:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:20:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-26T17:20:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[],\\\"sizeBytes\\\":1259074778},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:d0f5facf1d0e6c487de9741d96bd2ca8f5d0bd808390ab8f986f9930acbf9d13\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1216936646},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:7059bd937d94a0855e8ba1a48f19dee586ff3e9a053fe5dce38035b39c25028d\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:795760eaf4fabf83f7e660810f5f96cf5e652cb834e1316acb59598cbe3f610a\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1215588055},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c
3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-cli@sha256:69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9\\\",\\\"registry.redhat.io/openshift4/ose-cli@sha256:ef83967297f619f45075e7fd1428a1eb981622a6c174c46fb53b158ed24bed85\\\",\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"],\\\"sizeBytes\\\":584351326},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18f
ac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65
820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: E0226 17:20:43.977045 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: E0226 17:20:43.977543 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: E0226 17:20:43.977881 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc 
kubenswrapper[4805]: E0226 17:20:43.978250 4805 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:43 crc kubenswrapper[4805]: E0226 17:20:43.978278 4805 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.303438 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.303999 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.304167 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.304450 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.304870 4805 status_manager.go:851] "Failed to get 
status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.305098 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.305271 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.305439 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.305584 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.305733 4805 status_manager.go:851] 
"Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.305878 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.306028 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.306225 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.306388 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.396493 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcmms\" (UniqueName: \"kubernetes.io/projected/3e191647-de53-41b3-b7b9-5cb11ccb9f87-kube-api-access-bcmms\") pod \"3e191647-de53-41b3-b7b9-5cb11ccb9f87\" (UID: \"3e191647-de53-41b3-b7b9-5cb11ccb9f87\") " Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.409330 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e191647-de53-41b3-b7b9-5cb11ccb9f87-kube-api-access-bcmms" (OuterVolumeSpecName: "kube-api-access-bcmms") pod "3e191647-de53-41b3-b7b9-5cb11ccb9f87" (UID: "3e191647-de53-41b3-b7b9-5cb11ccb9f87"). InnerVolumeSpecName "kube-api-access-bcmms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.497647 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcmms\" (UniqueName: \"kubernetes.io/projected/3e191647-de53-41b3-b7b9-5cb11ccb9f87-kube-api-access-bcmms\") on node \"crc\" DevicePath \"\"" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.743317 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.743750 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.743975 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.744255 4805 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.744497 4805 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:44 crc kubenswrapper[4805]: I0226 17:20:44.744525 4805 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.744703 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Feb 26 17:20:44 crc kubenswrapper[4805]: E0226 17:20:44.945375 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.092863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535440-nztxm" event={"ID":"3e191647-de53-41b3-b7b9-5cb11ccb9f87","Type":"ContainerDied","Data":"1be33f37a4643913c6ba30218bd1dcd46b345ff548d21940b11af6faa19930ad"} Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.092909 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be33f37a4643913c6ba30218bd1dcd46b345ff548d21940b11af6faa19930ad" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.092977 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535440-nztxm" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.096102 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.096359 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.096547 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.096736 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.096924 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.097132 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.097325 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.097511 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.097703 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.097909 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" 
pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.098116 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.098303 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.098485 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: E0226 17:20:45.346848 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Feb 26 17:20:45 crc kubenswrapper[4805]: E0226 17:20:45.430166 4805 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-tfcdt.1897db9af4064ee2 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-tfcdt,UID:043bfd8c-1387-4b00-ad52-1e4efd43c942,APIVersion:v1,ResourceVersion:28134,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 29.751s (29.751s including waiting). Image size: 1215588055 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-26 17:20:36.959080162 +0000 UTC m=+351.520834501,LastTimestamp:2026-02-26 17:20:36.959080162 +0000 UTC m=+351.520834501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.952237 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.953334 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.953878 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.954095 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.954287 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.957384 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.959157 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.959536 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.962178 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.962489 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.962770 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" 
pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.963988 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.964419 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.964632 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.966669 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c2828f36-318b-4caa-a7de-47401983f4da" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.966689 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c2828f36-318b-4caa-a7de-47401983f4da" Feb 26 17:20:45 crc kubenswrapper[4805]: E0226 17:20:45.966930 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:45 crc kubenswrapper[4805]: I0226 17:20:45.967441 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:46 crc kubenswrapper[4805]: E0226 17:20:46.149625 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Feb 26 17:20:46 crc kubenswrapper[4805]: W0226 17:20:46.225409 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5eac07a9c4b90a342dab8f374e532e9d608b6632cbee8e2db343086aa6b1cdc8 WatchSource:0}: Error finding container 5eac07a9c4b90a342dab8f374e532e9d608b6632cbee8e2db343086aa6b1cdc8: Status 404 returned error can't find the container with id 5eac07a9c4b90a342dab8f374e532e9d608b6632cbee8e2db343086aa6b1cdc8 Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.974321 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.975108 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.975504 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.975729 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.976027 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.976459 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.977186 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.977638 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.977902 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.978161 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.978546 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.978787 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.979139 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:46 crc kubenswrapper[4805]: I0226 17:20:46.979537 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:47 crc kubenswrapper[4805]: I0226 17:20:47.108556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fvvs" event={"ID":"83d8504f-33ad-4812-bc1e-11233c225974","Type":"ContainerStarted","Data":"7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47"} Feb 26 17:20:47 crc kubenswrapper[4805]: I0226 17:20:47.113228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b08f993e6ab0471938f1e192b95d33c02d4e76a0d4f888684cd1684b16f92afa"} Feb 26 17:20:47 crc kubenswrapper[4805]: I0226 17:20:47.113285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5eac07a9c4b90a342dab8f374e532e9d608b6632cbee8e2db343086aa6b1cdc8"} Feb 26 17:20:47 crc kubenswrapper[4805]: E0226 
17:20:47.750910 4805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.121731 4805 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b08f993e6ab0471938f1e192b95d33c02d4e76a0d4f888684cd1684b16f92afa" exitCode=0 Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.121794 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b08f993e6ab0471938f1e192b95d33c02d4e76a0d4f888684cd1684b16f92afa"} Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.122084 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c2828f36-318b-4caa-a7de-47401983f4da" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.122116 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c2828f36-318b-4caa-a7de-47401983f4da" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.122684 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: E0226 17:20:48.122860 4805 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.123088 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.123531 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.123924 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.124344 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.124652 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.124901 4805 status_manager.go:851] "Failed to get status for pod" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.125212 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.125530 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.125772 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.126055 4805 status_manager.go:851] "Failed to get status for pod" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.126353 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.126636 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.126923 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.127336 4805 status_manager.go:851] "Failed to get status for pod" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.127570 4805 status_manager.go:851] "Failed to get status for pod" podUID="83d8504f-33ad-4812-bc1e-11233c225974" pod="openshift-marketplace/community-operators-7fvvs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fvvs\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.127833 4805 status_manager.go:851] "Failed to get status for pod" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" pod="openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-79c9b785f5-sp7zz\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.128245 4805 status_manager.go:851] "Failed to get status for pod" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" pod="openshift-marketplace/redhat-marketplace-jttjv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jttjv\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.129440 4805 status_manager.go:851] "Failed to get status for pod" podUID="231a1216-2a55-4e7b-b026-104624c69857" pod="openshift-marketplace/redhat-operators-wnt7l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-wnt7l\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.130297 4805 status_manager.go:851] "Failed to get status for pod" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" pod="openshift-controller-manager/controller-manager-565fd78f84-xlklc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-565fd78f84-xlklc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.130646 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" pod="openshift-marketplace/certified-operators-rmjsl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-rmjsl\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.131056 4805 status_manager.go:851] "Failed to get status for pod" podUID="884caea0-c055-415b-92c3-fae420465726" pod="openshift-marketplace/redhat-operators-n5l62" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-n5l62\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.131475 4805 status_manager.go:851] "Failed to get status for pod" podUID="87196950-f6be-442b-a725-3cdee5962f55" pod="openshift-marketplace/certified-operators-gzkpq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gzkpq\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.131758 4805 status_manager.go:851] "Failed to get status for pod" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" pod="openshift-marketplace/redhat-marketplace-2xsss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-2xsss\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.132049 4805 status_manager.go:851] "Failed to get status for pod" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" pod="openshift-infra/auto-csr-approver-29535440-nztxm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29535440-nztxm\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.132420 4805 status_manager.go:851] "Failed to get status for pod" 
podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" pod="openshift-authentication/oauth-openshift-558db77b4-mcvr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mcvr5\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.133554 4805 status_manager.go:851] "Failed to get status for pod" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" pod="openshift-marketplace/community-operators-tfcdt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tfcdt\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:48 crc kubenswrapper[4805]: I0226 17:20:48.134354 4805 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Feb 26 17:20:49 crc kubenswrapper[4805]: I0226 17:20:49.953091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:49 crc kubenswrapper[4805]: I0226 17:20:49.953132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:49 crc kubenswrapper[4805]: I0226 17:20:49.954017 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 26 17:20:49 crc kubenswrapper[4805]: I0226 17:20:49.953309 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:49 crc kubenswrapper[4805]: I0226 17:20:49.954572 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:20:49 crc kubenswrapper[4805]: I0226 17:20:49.954780 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.019958 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.020483 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.114743 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.136724 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerStarted","Data":"10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7"} Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.138929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"81a1ffa878b6fec718d27b3bb60557f97aeb283c9c7f11b277e7dc7025423dea"} Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.141463 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 17:20:50 crc 
kubenswrapper[4805]: I0226 17:20:50.143105 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.143149 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b" exitCode=1 Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.143267 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b"} Feb 26 17:20:50 crc kubenswrapper[4805]: I0226 17:20:50.143718 4805 scope.go:117] "RemoveContainer" containerID="8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b" Feb 26 17:20:51 crc kubenswrapper[4805]: I0226 17:20:51.229557 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:20:52 crc kubenswrapper[4805]: I0226 17:20:52.157785 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f2d88c13743c0eef61aedc3c8944aa9a6c05144a01c5917860a1cf8b0e7ee79e"} Feb 26 17:20:52 crc kubenswrapper[4805]: I0226 17:20:52.176199 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:20:52 crc kubenswrapper[4805]: I0226 17:20:52.213740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:20:53 crc kubenswrapper[4805]: I0226 17:20:53.034644 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:20:53 crc kubenswrapper[4805]: I0226 17:20:53.034701 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:20:53 crc kubenswrapper[4805]: I0226 17:20:53.165166 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 17:20:53 crc kubenswrapper[4805]: I0226 17:20:53.167041 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 17:20:53 crc kubenswrapper[4805]: I0226 17:20:53.167184 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1715e7dd93558f4dbe006f03b545a383b0491c34e670307f4e469b563e0d26a6"} Feb 26 17:20:53 crc kubenswrapper[4805]: I0226 17:20:53.169827 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfcdt" event={"ID":"043bfd8c-1387-4b00-ad52-1e4efd43c942","Type":"ContainerStarted","Data":"d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3"} Feb 26 17:20:54 crc kubenswrapper[4805]: I0226 17:20:54.072811 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wnt7l" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="registry-server" probeResult="failure" output=< Feb 26 17:20:54 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:20:54 crc kubenswrapper[4805]: > Feb 26 17:20:54 crc kubenswrapper[4805]: I0226 17:20:54.166644 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:20:57 crc kubenswrapper[4805]: W0226 17:20:57.319135 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0871e8f9fe6a05502b0d7aac27d0c24c7edfc4220569b74a0c4085a6ebf7a7af WatchSource:0}: Error finding container 0871e8f9fe6a05502b0d7aac27d0c24c7edfc4220569b74a0c4085a6ebf7a7af: Status 404 returned error can't find the container with id 0871e8f9fe6a05502b0d7aac27d0c24c7edfc4220569b74a0c4085a6ebf7a7af Feb 26 17:20:58 crc kubenswrapper[4805]: I0226 17:20:58.019127 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:20:58 crc kubenswrapper[4805]: I0226 17:20:58.196432 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"319de7161c148f0548489bdb78cede7ce0bfa94a17e456f2bb4ed471e86f1fc1"} Feb 26 17:20:58 crc kubenswrapper[4805]: I0226 17:20:58.197696 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0871e8f9fe6a05502b0d7aac27d0c24c7edfc4220569b74a0c4085a6ebf7a7af"} Feb 26 17:20:58 crc kubenswrapper[4805]: I0226 17:20:58.198935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"18e0d5a04f8e42f5801ed1c0dcf19928f50c07427f318b21b3a1f3e434561102"} Feb 26 17:20:59 crc kubenswrapper[4805]: I0226 17:20:59.700764 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:20:59 crc kubenswrapper[4805]: I0226 17:20:59.700818 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:20:59 crc kubenswrapper[4805]: I0226 17:20:59.807588 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:21:00 crc kubenswrapper[4805]: I0226 17:21:00.247352 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:21:03 crc kubenswrapper[4805]: I0226 17:21:03.082430 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:21:03 crc kubenswrapper[4805]: I0226 17:21:03.121851 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:21:04 crc kubenswrapper[4805]: I0226 17:21:04.165838 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:21:04 crc kubenswrapper[4805]: I0226 17:21:04.166109 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 17:21:04 crc kubenswrapper[4805]: I0226 17:21:04.166270 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 
17:21:06 crc kubenswrapper[4805]: I0226 17:21:06.456508 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 26 17:21:06 crc kubenswrapper[4805]: I0226 17:21:06.507895 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.106733 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.348281 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.390951 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.447230 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.805554 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.864979 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 26 17:21:07 crc kubenswrapper[4805]: I0226 17:21:07.870251 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.095705 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.200962 4805 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.205333 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.260363 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7c41be78c600725c8ee72727632c1edba778618fb0b8bc64c6a3891de6b394db"} Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.321393 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.415889 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 26 17:21:08 crc kubenswrapper[4805]: I0226 17:21:08.687459 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.005255 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.020591 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.150666 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.197970 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 26 17:21:09 crc kubenswrapper[4805]: 
I0226 17:21:09.269698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerStarted","Data":"b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.274651 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerStarted","Data":"49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.280197 4805 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.281360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"19199ab98362e49f6e07800bad7ec2f49e5c6c2b5b51e1b8044042bcbf4eb0b5"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.284877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jttjv" event={"ID":"6239f68a-d80a-4fd4-9a6d-69bd48b1c015","Type":"ContainerStarted","Data":"b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.286868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerStarted","Data":"691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.288201 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d7627dd11301e0ffebe9ef188c13255a67a8abbbae85090fcc438f257973916c"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.288331 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.289932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"062a7a67f903a34491bebe31bf7373c531551a2610f17083a9c0edb750488e53"} Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.302579 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.325964 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.359074 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.372540 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.384964 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.483415 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.496825 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.621820 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 26 17:21:09 crc kubenswrapper[4805]: I0226 17:21:09.676719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.080284 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.147673 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.187997 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.198279 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.198355 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.207270 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.218987 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.279738 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 
17:21:10.297732 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37f19437cb4720d7419817a5f20dc22198af3c4204f7860856396539278ed2cf"} Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.380550 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.387507 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.395822 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.400438 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.523102 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.640249 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.701882 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.706155 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.829462 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 26 17:21:10 crc 
kubenswrapper[4805]: I0226 17:21:10.854501 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.915545 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 26 17:21:10 crc kubenswrapper[4805]: I0226 17:21:10.924652 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.045875 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.146294 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.192069 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.237941 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gzkpq" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="registry-server" probeResult="failure" output=< Feb 26 17:21:11 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:21:11 crc kubenswrapper[4805]: > Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.265768 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.307771 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ed78e3eb7fa41e0fba8560c43214ff4eeee30ca72008caf5604ac7f2e88bda1e"} Feb 26 
17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.398620 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.471196 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.502834 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.638599 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.639068 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.642577 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.678538 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.701578 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.735031 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.737991 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.748198 4805 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.833785 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.863112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.870203 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 26 17:21:11 crc kubenswrapper[4805]: I0226 17:21:11.945476 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.007452 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.023720 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.036433 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.065827 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.142394 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.159069 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 
17:21:12.164266 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.312289 4805 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c2828f36-318b-4caa-a7de-47401983f4da" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.312328 4805 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c2828f36-318b-4caa-a7de-47401983f4da" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.326313 4805 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.342671 4805 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c2828f36-318b-4caa-a7de-47401983f4da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81a1ffa878b6fec718d27b3bb60557f97aeb283c9c7f11b277e7dc7025423dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://19199ab98362e49f6e07800bad7ec2f49e5c6c2b5b51e1b8044042bcbf4eb0b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:21:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d88c13743c0eef61aedc3c8944aa9a6c05144a01c5917860a1cf8b0e7ee79e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355
e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:20:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed78e3eb7fa41e0fba8560c43214ff4eeee30ca72008caf5604ac7f2e88bda1e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:21:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f19437cb4720d7419817a5f20dc22198af3c4204f7860856396539278ed2cf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-26T17:21:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.403706 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.485301 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.523841 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.631709 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.773713 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.864826 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.873983 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.980482 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 26 17:21:12 crc kubenswrapper[4805]: I0226 17:21:12.998939 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.071237 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.197164 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.296255 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.315142 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.318859 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.318922 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="7c41be78c600725c8ee72727632c1edba778618fb0b8bc64c6a3891de6b394db" exitCode=255 Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.319067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"7c41be78c600725c8ee72727632c1edba778618fb0b8bc64c6a3891de6b394db"} Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.319792 4805 scope.go:117] "RemoveContainer" containerID="7c41be78c600725c8ee72727632c1edba778618fb0b8bc64c6a3891de6b394db" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.331385 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.335466 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n5l62" 
Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.335508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.342454 4805 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="eafbaa41-e4d6-4092-a97f-8694635448aa" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.356261 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.373636 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.564516 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.686328 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.700531 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.756783 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.786546 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.791756 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.805930 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.807764 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.850843 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.885092 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.941926 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 26 17:21:13 crc kubenswrapper[4805]: I0226 17:21:13.984213 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.013061 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.097611 4805 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.166601 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.166676 4805 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.254917 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.286664 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.359490 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.370967 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.376967 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n5l62" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="registry-server" probeResult="failure" output=< Feb 26 17:21:14 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:21:14 crc kubenswrapper[4805]: > Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.467718 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.469680 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.514008 4805 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.532465 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.702954 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.737220 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.794321 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.914813 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 26 17:21:14 crc kubenswrapper[4805]: I0226 17:21:14.923692 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.149169 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.284344 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.288569 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.307717 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.331976 4805 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.332067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1be3bc8406b07c6767ab638e176b548226a3aceb13e26bf70ac8c6913d433403"} Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.332930 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.435999 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.436407 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.470199 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.553369 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.618719 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.623590 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.673268 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.717111 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.719053 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.788455 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.835433 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.908258 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.976079 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 26 17:21:15 crc kubenswrapper[4805]: I0226 17:21:15.997293 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.106229 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.194328 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.304565 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.339832 4805 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.340462 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/0.log" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.340508 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d751cbb-f2e2-430d-9754-c882a5e924a5" containerID="1be3bc8406b07c6767ab638e176b548226a3aceb13e26bf70ac8c6913d433403" exitCode=255 Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.340536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerDied","Data":"1be3bc8406b07c6767ab638e176b548226a3aceb13e26bf70ac8c6913d433403"} Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.340569 4805 scope.go:117] "RemoveContainer" containerID="7c41be78c600725c8ee72727632c1edba778618fb0b8bc64c6a3891de6b394db" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.340970 4805 scope.go:117] "RemoveContainer" containerID="1be3bc8406b07c6767ab638e176b548226a3aceb13e26bf70ac8c6913d433403" Feb 26 17:21:16 crc kubenswrapper[4805]: E0226 17:21:16.341241 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=check-endpoints pod=network-check-source-55646444c4-trplf_openshift-network-diagnostics(9d751cbb-f2e2-430d-9754-c882a5e924a5)\"" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.408802 4805 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.465973 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.505096 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.662471 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.673728 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.676869 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.699808 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.728458 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.744565 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.797841 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.813882 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 
17:21:16.814126 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.892781 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.923202 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 26 17:21:16 crc kubenswrapper[4805]: I0226 17:21:16.981421 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.069091 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.075159 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.081314 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.166812 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.220506 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.237010 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.345647 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.358657 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.373853 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.432300 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.451060 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.501345 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.531192 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.571179 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.575483 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.811423 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.856202 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.870715 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.912890 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.962908 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 26 17:21:17 crc kubenswrapper[4805]: I0226 17:21:17.977365 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.015245 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.039097 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.060476 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.105339 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.141622 4805 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.196588 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.289979 4805 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.290456 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.474059 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.544854 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.573539 4805 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.609793 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.648981 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.650040 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.700388 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.730180 4805 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.731011 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rmjsl" podStartSLOduration=17.042212453 
podStartE2EDuration="2m39.730993357s" podCreationTimestamp="2026-02-26 17:18:39 +0000 UTC" firstStartedPulling="2026-02-26 17:18:41.709340348 +0000 UTC m=+236.271094687" lastFinishedPulling="2026-02-26 17:21:04.398121252 +0000 UTC m=+378.959875591" observedRunningTime="2026-02-26 17:21:10.321537283 +0000 UTC m=+384.883291642" watchObservedRunningTime="2026-02-26 17:21:18.730993357 +0000 UTC m=+393.292747696" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.731162 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2xsss" podStartSLOduration=44.147762785 podStartE2EDuration="2m37.731156941s" podCreationTimestamp="2026-02-26 17:18:41 +0000 UTC" firstStartedPulling="2026-02-26 17:18:44.024319285 +0000 UTC m=+238.586073624" lastFinishedPulling="2026-02-26 17:20:37.607713441 +0000 UTC m=+352.169467780" observedRunningTime="2026-02-26 17:20:56.218736675 +0000 UTC m=+370.780491014" watchObservedRunningTime="2026-02-26 17:21:18.731156941 +0000 UTC m=+393.292911300" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.731708 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jttjv" podStartSLOduration=15.835505039 podStartE2EDuration="2m37.731704934s" podCreationTimestamp="2026-02-26 17:18:41 +0000 UTC" firstStartedPulling="2026-02-26 17:18:43.976734494 +0000 UTC m=+238.538488853" lastFinishedPulling="2026-02-26 17:21:05.872934369 +0000 UTC m=+380.434688748" observedRunningTime="2026-02-26 17:21:09.32440216 +0000 UTC m=+383.886156499" watchObservedRunningTime="2026-02-26 17:21:18.731704934 +0000 UTC m=+393.293459273" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.732134 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzkpq" podStartSLOduration=16.209950916 podStartE2EDuration="2m39.732127995s" podCreationTimestamp="2026-02-26 17:18:39 +0000 UTC" 
firstStartedPulling="2026-02-26 17:18:42.956143689 +0000 UTC m=+237.517898028" lastFinishedPulling="2026-02-26 17:21:06.478320768 +0000 UTC m=+381.040075107" observedRunningTime="2026-02-26 17:21:09.340936566 +0000 UTC m=+383.902690905" watchObservedRunningTime="2026-02-26 17:21:18.732127995 +0000 UTC m=+393.293882334" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.732477 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wnt7l" podStartSLOduration=33.142218641 podStartE2EDuration="2m36.732473703s" podCreationTimestamp="2026-02-26 17:18:42 +0000 UTC" firstStartedPulling="2026-02-26 17:18:45.046000315 +0000 UTC m=+239.607754654" lastFinishedPulling="2026-02-26 17:20:48.636255367 +0000 UTC m=+363.198009716" observedRunningTime="2026-02-26 17:20:56.466138377 +0000 UTC m=+371.027892716" watchObservedRunningTime="2026-02-26 17:21:18.732473703 +0000 UTC m=+393.294228042" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.732697 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7fvvs" podStartSLOduration=36.456269173 podStartE2EDuration="2m39.732691839s" podCreationTimestamp="2026-02-26 17:18:39 +0000 UTC" firstStartedPulling="2026-02-26 17:18:42.918094453 +0000 UTC m=+237.479848792" lastFinishedPulling="2026-02-26 17:20:46.194517119 +0000 UTC m=+360.756271458" observedRunningTime="2026-02-26 17:20:56.319797996 +0000 UTC m=+370.881552355" watchObservedRunningTime="2026-02-26 17:21:18.732691839 +0000 UTC m=+393.294446178" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.733102 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tfcdt" podStartSLOduration=32.186245526 podStartE2EDuration="2m39.733097809s" podCreationTimestamp="2026-02-26 17:18:39 +0000 UTC" firstStartedPulling="2026-02-26 17:18:42.944546044 +0000 UTC m=+237.506300383" 
lastFinishedPulling="2026-02-26 17:20:50.491398327 +0000 UTC m=+365.053152666" observedRunningTime="2026-02-26 17:20:56.433963158 +0000 UTC m=+370.995717497" watchObservedRunningTime="2026-02-26 17:21:18.733097809 +0000 UTC m=+393.294852148" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.733167 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n5l62" podStartSLOduration=15.28791522 podStartE2EDuration="2m36.73316479s" podCreationTimestamp="2026-02-26 17:18:42 +0000 UTC" firstStartedPulling="2026-02-26 17:18:45.045092193 +0000 UTC m=+239.606846532" lastFinishedPulling="2026-02-26 17:21:06.490341743 +0000 UTC m=+381.052096102" observedRunningTime="2026-02-26 17:21:09.308438418 +0000 UTC m=+383.870192757" watchObservedRunningTime="2026-02-26 17:21:18.73316479 +0000 UTC m=+393.294919129" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.734632 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c9b785f5-sp7zz","openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-565fd78f84-xlklc","openshift-authentication/oauth-openshift-558db77b4-mcvr5"] Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.734685 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.734996 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.750790 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=6.750772722 podStartE2EDuration="6.750772722s" podCreationTimestamp="2026-02-26 17:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-26 17:21:18.74943456 +0000 UTC m=+393.311188899" watchObservedRunningTime="2026-02-26 17:21:18.750772722 +0000 UTC m=+393.312527061" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.776337 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.889459 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.968633 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" path="/var/lib/kubelet/pods/2f7f215f-544a-4b8a-814d-5e6ecd814b2d/volumes" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.969606 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" path="/var/lib/kubelet/pods/73cd7b83-240c-458e-88a3-7ceace9a121d/volumes" Feb 26 17:21:18 crc kubenswrapper[4805]: I0226 17:21:18.970232 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" path="/var/lib/kubelet/pods/c783f4e5-c0b1-4832-9988-3ad7d2a38129/volumes" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.036764 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.066751 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.142782 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.148704 4805 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.153152 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.387481 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.395397 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.460303 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68b5756878-p2ltt"] Feb 26 17:21:19 crc kubenswrapper[4805]: E0226 17:21:19.460856 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" containerName="oc" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.460939 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" containerName="oc" Feb 26 17:21:19 crc kubenswrapper[4805]: E0226 17:21:19.460966 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.460984 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" Feb 26 17:21:19 crc kubenswrapper[4805]: E0226 17:21:19.461077 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" containerName="installer" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461100 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" containerName="installer" Feb 26 
17:21:19 crc kubenswrapper[4805]: E0226 17:21:19.461124 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" containerName="route-controller-manager" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461140 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" containerName="route-controller-manager" Feb 26 17:21:19 crc kubenswrapper[4805]: E0226 17:21:19.461167 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerName="controller-manager" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461180 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerName="controller-manager" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461408 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="73cd7b83-240c-458e-88a3-7ceace9a121d" containerName="controller-manager" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461450 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7f215f-544a-4b8a-814d-5e6ecd814b2d" containerName="oauth-openshift" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461475 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" containerName="oc" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461499 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c783f4e5-c0b1-4832-9988-3ad7d2a38129" containerName="route-controller-manager" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.461517 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabf0d12-f3a0-4562-a606-4b0cedcc6cd7" containerName="installer" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.462359 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.463797 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-575db9c649-6kwjs"] Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.464583 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.471602 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472435 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472649 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472707 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472839 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472888 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472978 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473004 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 26 17:21:19 crc 
kubenswrapper[4805]: I0226 17:21:19.473040 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473119 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473144 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.472847 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473364 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473371 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473513 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.473518 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.476602 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.476866 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.503690 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-68b5756878-p2ltt"] Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.508779 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.509621 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575db9c649-6kwjs"] Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.510971 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.516546 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.519267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.519521 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.531940 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-proxy-ca-bundles\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.531986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-audit-policies\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: 
\"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532013 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532067 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feefeca2-5088-4ec8-aeb5-44ed80a9a320-audit-dir\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532100 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: 
I0226 17:21:19.532152 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-login\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-error\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-service-ca\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532272 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-session\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-router-certs\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532330 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532352 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dc115-0279-4477-a84b-d0609ca43e7b-serving-cert\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgnp\" (UniqueName: \"kubernetes.io/projected/feefeca2-5088-4ec8-aeb5-44ed80a9a320-kube-api-access-wlgnp\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532410 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-client-ca\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " 
pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532435 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532470 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjz6r\" (UniqueName: \"kubernetes.io/projected/c66dc115-0279-4477-a84b-d0609ca43e7b-kube-api-access-qjz6r\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532496 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-config\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.532525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633383 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633441 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-login\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-error\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-service-ca\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dc115-0279-4477-a84b-d0609ca43e7b-serving-cert\") pod \"controller-manager-68b5756878-p2ltt\" (UID: 
\"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633550 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-session\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-router-certs\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633592 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633627 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgnp\" (UniqueName: \"kubernetes.io/projected/feefeca2-5088-4ec8-aeb5-44ed80a9a320-kube-api-access-wlgnp\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633649 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-client-ca\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633670 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633706 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjz6r\" (UniqueName: \"kubernetes.io/projected/c66dc115-0279-4477-a84b-d0609ca43e7b-kube-api-access-qjz6r\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633736 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-config\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " 
pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-proxy-ca-bundles\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-audit-policies\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633823 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feefeca2-5088-4ec8-aeb5-44ed80a9a320-audit-dir\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.633881 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.635116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-audit-policies\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.635125 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-client-ca\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.635258 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/feefeca2-5088-4ec8-aeb5-44ed80a9a320-audit-dir\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.635482 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.635659 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.636286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-proxy-ca-bundles\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.636441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-service-ca\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.640454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-error\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.640607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-login\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: 
\"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.640820 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.640900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.641850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-config\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.642057 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-session\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.642438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-system-router-certs\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.642506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.645899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dc115-0279-4477-a84b-d0609ca43e7b-serving-cert\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.646837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/feefeca2-5088-4ec8-aeb5-44ed80a9a320-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.647800 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.652832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjz6r\" (UniqueName: 
\"kubernetes.io/projected/c66dc115-0279-4477-a84b-d0609ca43e7b-kube-api-access-qjz6r\") pod \"controller-manager-68b5756878-p2ltt\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.653954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgnp\" (UniqueName: \"kubernetes.io/projected/feefeca2-5088-4ec8-aeb5-44ed80a9a320-kube-api-access-wlgnp\") pod \"oauth-openshift-575db9c649-6kwjs\" (UID: \"feefeca2-5088-4ec8-aeb5-44ed80a9a320\") " pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.672432 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.703098 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.791286 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.824566 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.847491 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.847561 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.882472 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.910009 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:21:19 crc kubenswrapper[4805]: I0226 17:21:19.979491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.052817 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.074693 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575db9c649-6kwjs"] Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.147631 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.216810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68b5756878-p2ltt"] Feb 26 17:21:20 crc kubenswrapper[4805]: W0226 17:21:20.220260 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc66dc115_0279_4477_a84b_d0609ca43e7b.slice/crio-9b5376c45a7b4b49dbc80dab6fd805bbd492be50e7a378f540e02f306feec5f5 WatchSource:0}: Error finding container 9b5376c45a7b4b49dbc80dab6fd805bbd492be50e7a378f540e02f306feec5f5: Status 404 returned error can't find the container with id 9b5376c45a7b4b49dbc80dab6fd805bbd492be50e7a378f540e02f306feec5f5 Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.244061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.306209 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.363420 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" event={"ID":"c66dc115-0279-4477-a84b-d0609ca43e7b","Type":"ContainerStarted","Data":"2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee"} Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.363497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" event={"ID":"c66dc115-0279-4477-a84b-d0609ca43e7b","Type":"ContainerStarted","Data":"9b5376c45a7b4b49dbc80dab6fd805bbd492be50e7a378f540e02f306feec5f5"} Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.363634 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.365515 4805 patch_prober.go:28] interesting pod/controller-manager-68b5756878-p2ltt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: 
connect: connection refused" start-of-body= Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.365590 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" podUID="c66dc115-0279-4477-a84b-d0609ca43e7b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.365731 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" event={"ID":"feefeca2-5088-4ec8-aeb5-44ed80a9a320","Type":"ContainerStarted","Data":"5daaac091c52d19834f45f37bba78d3418179b7d6dc58adde92572a89be5249b"} Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.365786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" event={"ID":"feefeca2-5088-4ec8-aeb5-44ed80a9a320","Type":"ContainerStarted","Data":"86384ecf146a5e3cca3d3be1c48ad7b2bb2ada5747e23567f35ac4a723d1b0ea"} Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.383992 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" podStartSLOduration=58.383969256 podStartE2EDuration="58.383969256s" podCreationTimestamp="2026-02-26 17:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:21:20.381788253 +0000 UTC m=+394.943542592" watchObservedRunningTime="2026-02-26 17:21:20.383969256 +0000 UTC m=+394.945723595" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.404978 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" podStartSLOduration=85.404954392 podStartE2EDuration="1m25.404954392s" 
podCreationTimestamp="2026-02-26 17:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:21:20.403576888 +0000 UTC m=+394.965331227" watchObservedRunningTime="2026-02-26 17:21:20.404954392 +0000 UTC m=+394.966708731" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.419826 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.444137 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.486256 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.510140 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.863385 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.914267 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.967884 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.968423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:20 crc kubenswrapper[4805]: I0226 17:21:20.974038 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:21 crc 
kubenswrapper[4805]: I0226 17:21:21.031980 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.226068 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.372974 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575db9c649-6kwjs_feefeca2-5088-4ec8-aeb5-44ed80a9a320/oauth-openshift/0.log" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.373046 4805 generic.go:334] "Generic (PLEG): container finished" podID="feefeca2-5088-4ec8-aeb5-44ed80a9a320" containerID="5daaac091c52d19834f45f37bba78d3418179b7d6dc58adde92572a89be5249b" exitCode=255 Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.373120 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" event={"ID":"feefeca2-5088-4ec8-aeb5-44ed80a9a320","Type":"ContainerDied","Data":"5daaac091c52d19834f45f37bba78d3418179b7d6dc58adde92572a89be5249b"} Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.373507 4805 scope.go:117] "RemoveContainer" containerID="5daaac091c52d19834f45f37bba78d3418179b7d6dc58adde92572a89be5249b" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.377969 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.379011 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.385138 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 26 17:21:21 crc kubenswrapper[4805]: 
I0226 17:21:21.417044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.556166 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.718379 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.725658 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml"] Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.726269 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.728224 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.728286 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.728403 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.728537 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.728555 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.728778 4805 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-route-controller-manager"/"client-ca" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.758927 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml"] Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.767745 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-client-ca\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.768064 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca0c71a-d56b-4d23-a063-a32dc21262fe-serving-cert\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.768159 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88c2r\" (UniqueName: \"kubernetes.io/projected/6ca0c71a-d56b-4d23-a063-a32dc21262fe-kube-api-access-88c2r\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.768257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-config\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " 
pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.837687 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.869698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-client-ca\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.869751 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca0c71a-d56b-4d23-a063-a32dc21262fe-serving-cert\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.869771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88c2r\" (UniqueName: \"kubernetes.io/projected/6ca0c71a-d56b-4d23-a063-a32dc21262fe-kube-api-access-88c2r\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.869800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-config\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" 
Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.870907 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-config\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.871458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-client-ca\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.880310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca0c71a-d56b-4d23-a063-a32dc21262fe-serving-cert\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.894035 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88c2r\" (UniqueName: \"kubernetes.io/projected/6ca0c71a-d56b-4d23-a063-a32dc21262fe-kube-api-access-88c2r\") pod \"route-controller-manager-56449c5486-ccnml\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.919025 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 26 17:21:21 crc kubenswrapper[4805]: I0226 17:21:21.945284 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.051848 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.086370 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.204775 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.283596 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.292513 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml"] Feb 26 17:21:22 crc kubenswrapper[4805]: W0226 17:21:22.302299 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ca0c71a_d56b_4d23_a063_a32dc21262fe.slice/crio-3a86f8b71a306b407660160747f486e2e8dff3324437bb3923b2a609ade99f9d WatchSource:0}: Error finding container 3a86f8b71a306b407660160747f486e2e8dff3324437bb3923b2a609ade99f9d: Status 404 returned error can't find the container with id 3a86f8b71a306b407660160747f486e2e8dff3324437bb3923b2a609ade99f9d Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.314684 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.337674 4805 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.379904 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575db9c649-6kwjs_feefeca2-5088-4ec8-aeb5-44ed80a9a320/oauth-openshift/1.log" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.380424 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575db9c649-6kwjs_feefeca2-5088-4ec8-aeb5-44ed80a9a320/oauth-openshift/0.log" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.380465 4805 generic.go:334] "Generic (PLEG): container finished" podID="feefeca2-5088-4ec8-aeb5-44ed80a9a320" containerID="ba4b6cc971cb0077108954c7925d7c89b1b204d92a346a139400c15a1a6c7cd0" exitCode=255 Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.380527 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" event={"ID":"feefeca2-5088-4ec8-aeb5-44ed80a9a320","Type":"ContainerDied","Data":"ba4b6cc971cb0077108954c7925d7c89b1b204d92a346a139400c15a1a6c7cd0"} Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.380567 4805 scope.go:117] "RemoveContainer" containerID="5daaac091c52d19834f45f37bba78d3418179b7d6dc58adde92572a89be5249b" Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.380908 4805 scope.go:117] "RemoveContainer" containerID="ba4b6cc971cb0077108954c7925d7c89b1b204d92a346a139400c15a1a6c7cd0" Feb 26 17:21:22 crc kubenswrapper[4805]: E0226 17:21:22.381115 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-575db9c649-6kwjs_openshift-authentication(feefeca2-5088-4ec8-aeb5-44ed80a9a320)\"" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" podUID="feefeca2-5088-4ec8-aeb5-44ed80a9a320" Feb 26 17:21:22 
crc kubenswrapper[4805]: I0226 17:21:22.381784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" event={"ID":"6ca0c71a-d56b-4d23-a063-a32dc21262fe","Type":"ContainerStarted","Data":"3a86f8b71a306b407660160747f486e2e8dff3324437bb3923b2a609ade99f9d"} Feb 26 17:21:22 crc kubenswrapper[4805]: I0226 17:21:22.923668 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.246729 4805 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.247435 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://d0682160bc5a5d23ae35e6999fb4630e74af294a6e56bcc044918da56611b8be" gracePeriod=5 Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.377065 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.390179 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575db9c649-6kwjs_feefeca2-5088-4ec8-aeb5-44ed80a9a320/oauth-openshift/1.log" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.391125 4805 scope.go:117] "RemoveContainer" containerID="ba4b6cc971cb0077108954c7925d7c89b1b204d92a346a139400c15a1a6c7cd0" Feb 26 17:21:23 crc kubenswrapper[4805]: E0226 17:21:23.391327 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift 
pod=oauth-openshift-575db9c649-6kwjs_openshift-authentication(feefeca2-5088-4ec8-aeb5-44ed80a9a320)\"" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" podUID="feefeca2-5088-4ec8-aeb5-44ed80a9a320" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.392405 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" event={"ID":"6ca0c71a-d56b-4d23-a063-a32dc21262fe","Type":"ContainerStarted","Data":"656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95"} Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.414483 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" podStartSLOduration=61.414462884 podStartE2EDuration="1m1.414462884s" podCreationTimestamp="2026-02-26 17:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:21:23.410162039 +0000 UTC m=+397.971916398" watchObservedRunningTime="2026-02-26 17:21:23.414462884 +0000 UTC m=+397.976217223" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.422373 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.537467 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 26 17:21:23 crc kubenswrapper[4805]: I0226 17:21:23.594863 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.166841 4805 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.168385 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.168593 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.169676 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"1715e7dd93558f4dbe006f03b545a383b0491c34e670307f4e469b563e0d26a6"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.169894 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://1715e7dd93558f4dbe006f03b545a383b0491c34e670307f4e469b563e0d26a6" gracePeriod=30 Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.265735 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 17:21:24.399680 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:24 crc kubenswrapper[4805]: I0226 
17:21:24.404208 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.420276 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.420603 4805 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="d0682160bc5a5d23ae35e6999fb4630e74af294a6e56bcc044918da56611b8be" exitCode=137 Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.837612 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.837726 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976150 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976390 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976435 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976496 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976585 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976951 4805 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976987 4805 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.977004 4805 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.976326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:21:28 crc kubenswrapper[4805]: I0226 17:21:28.989438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.077779 4805 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.077827 4805 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.429779 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.429884 4805 scope.go:117] "RemoveContainer" containerID="d0682160bc5a5d23ae35e6999fb4630e74af294a6e56bcc044918da56611b8be" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.430079 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.824977 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.825450 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.826252 4805 scope.go:117] "RemoveContainer" containerID="ba4b6cc971cb0077108954c7925d7c89b1b204d92a346a139400c15a1a6c7cd0" Feb 26 17:21:29 crc kubenswrapper[4805]: E0226 17:21:29.826622 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-575db9c649-6kwjs_openshift-authentication(feefeca2-5088-4ec8-aeb5-44ed80a9a320)\"" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" podUID="feefeca2-5088-4ec8-aeb5-44ed80a9a320" Feb 26 17:21:29 crc kubenswrapper[4805]: I0226 17:21:29.953913 4805 scope.go:117] "RemoveContainer" containerID="1be3bc8406b07c6767ab638e176b548226a3aceb13e26bf70ac8c6913d433403" Feb 26 17:21:30 crc kubenswrapper[4805]: I0226 17:21:30.438935 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-source-55646444c4-trplf_9d751cbb-f2e2-430d-9754-c882a5e924a5/check-endpoints/1.log" Feb 26 17:21:30 crc kubenswrapper[4805]: I0226 17:21:30.438991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"98efce2e829cf52c097dfd8f2d93957e3c5309f1731018e1a120ec780ce42313"} Feb 26 17:21:30 crc kubenswrapper[4805]: I0226 17:21:30.962599 4805 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 26 17:21:40 crc kubenswrapper[4805]: I0226 17:21:40.283861 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 26 17:21:41 crc kubenswrapper[4805]: I0226 17:21:41.953503 4805 scope.go:117] "RemoveContainer" containerID="ba4b6cc971cb0077108954c7925d7c89b1b204d92a346a139400c15a1a6c7cd0" Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.513835 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575db9c649-6kwjs_feefeca2-5088-4ec8-aeb5-44ed80a9a320/oauth-openshift/1.log" Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.514243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" event={"ID":"feefeca2-5088-4ec8-aeb5-44ed80a9a320","Type":"ContainerStarted","Data":"a2bc6f8d133553cd42018e7bdf6c892fac889a72b53aec0ec90561b547fb458d"} Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.514662 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.516609 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerID="715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55" exitCode=0 Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.516663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" event={"ID":"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f","Type":"ContainerDied","Data":"715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55"} Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.517366 4805 scope.go:117] 
"RemoveContainer" containerID="715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55" Feb 26 17:21:42 crc kubenswrapper[4805]: I0226 17:21:42.637851 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-575db9c649-6kwjs" Feb 26 17:21:43 crc kubenswrapper[4805]: I0226 17:21:43.524909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" event={"ID":"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f","Type":"ContainerStarted","Data":"d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481"} Feb 26 17:21:43 crc kubenswrapper[4805]: I0226 17:21:43.525758 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:21:43 crc kubenswrapper[4805]: I0226 17:21:43.527998 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:21:52 crc kubenswrapper[4805]: I0226 17:21:52.696432 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68b5756878-p2ltt"] Feb 26 17:21:52 crc kubenswrapper[4805]: I0226 17:21:52.697068 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" podUID="c66dc115-0279-4477-a84b-d0609ca43e7b" containerName="controller-manager" containerID="cri-o://2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee" gracePeriod=30 Feb 26 17:21:52 crc kubenswrapper[4805]: I0226 17:21:52.699649 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml"] Feb 26 17:21:52 crc kubenswrapper[4805]: I0226 17:21:52.699856 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" podUID="6ca0c71a-d56b-4d23-a063-a32dc21262fe" containerName="route-controller-manager" containerID="cri-o://656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95" gracePeriod=30 Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.170736 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.181513 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.196088 4805 generic.go:334] "Generic (PLEG): container finished" podID="c66dc115-0279-4477-a84b-d0609ca43e7b" containerID="2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee" exitCode=0 Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.196142 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.196174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" event={"ID":"c66dc115-0279-4477-a84b-d0609ca43e7b","Type":"ContainerDied","Data":"2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee"} Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.196220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b5756878-p2ltt" event={"ID":"c66dc115-0279-4477-a84b-d0609ca43e7b","Type":"ContainerDied","Data":"9b5376c45a7b4b49dbc80dab6fd805bbd492be50e7a378f540e02f306feec5f5"} Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.196238 4805 scope.go:117] "RemoveContainer" containerID="2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.197855 4805 generic.go:334] "Generic (PLEG): container finished" podID="6ca0c71a-d56b-4d23-a063-a32dc21262fe" containerID="656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95" exitCode=0 Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.197909 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.197905 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" event={"ID":"6ca0c71a-d56b-4d23-a063-a32dc21262fe","Type":"ContainerDied","Data":"656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95"} Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.198228 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml" event={"ID":"6ca0c71a-d56b-4d23-a063-a32dc21262fe","Type":"ContainerDied","Data":"3a86f8b71a306b407660160747f486e2e8dff3324437bb3923b2a609ade99f9d"} Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.219421 4805 scope.go:117] "RemoveContainer" containerID="2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee" Feb 26 17:21:53 crc kubenswrapper[4805]: E0226 17:21:53.219874 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee\": container with ID starting with 2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee not found: ID does not exist" containerID="2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.219908 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee"} err="failed to get container status \"2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee\": rpc error: code = NotFound desc = could not find container \"2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee\": container with ID starting with 
2c51758cbf2504b3eedf8abdff59909dd367f0f6240d4cd26862c312b5dde0ee not found: ID does not exist" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.219931 4805 scope.go:117] "RemoveContainer" containerID="656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.237167 4805 scope.go:117] "RemoveContainer" containerID="656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95" Feb 26 17:21:53 crc kubenswrapper[4805]: E0226 17:21:53.237932 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95\": container with ID starting with 656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95 not found: ID does not exist" containerID="656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.237973 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95"} err="failed to get container status \"656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95\": rpc error: code = NotFound desc = could not find container \"656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95\": container with ID starting with 656c95f06aaac82b2625abb562b5d86bf6c5668c694e21afe97be437ecb24d95 not found: ID does not exist" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315260 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-config\") pod \"c66dc115-0279-4477-a84b-d0609ca43e7b\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315575 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-client-ca\") pod \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315633 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dc115-0279-4477-a84b-d0609ca43e7b-serving-cert\") pod \"c66dc115-0279-4477-a84b-d0609ca43e7b\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315666 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-config\") pod \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315709 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-client-ca\") pod \"c66dc115-0279-4477-a84b-d0609ca43e7b\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315743 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-proxy-ca-bundles\") pod \"c66dc115-0279-4477-a84b-d0609ca43e7b\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315769 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjz6r\" (UniqueName: \"kubernetes.io/projected/c66dc115-0279-4477-a84b-d0609ca43e7b-kube-api-access-qjz6r\") pod \"c66dc115-0279-4477-a84b-d0609ca43e7b\" (UID: \"c66dc115-0279-4477-a84b-d0609ca43e7b\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 
17:21:53.315845 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88c2r\" (UniqueName: \"kubernetes.io/projected/6ca0c71a-d56b-4d23-a063-a32dc21262fe-kube-api-access-88c2r\") pod \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.315920 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca0c71a-d56b-4d23-a063-a32dc21262fe-serving-cert\") pod \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\" (UID: \"6ca0c71a-d56b-4d23-a063-a32dc21262fe\") " Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.316272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ca0c71a-d56b-4d23-a063-a32dc21262fe" (UID: "6ca0c71a-d56b-4d23-a063-a32dc21262fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.316591 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c66dc115-0279-4477-a84b-d0609ca43e7b" (UID: "c66dc115-0279-4477-a84b-d0609ca43e7b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.316971 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c66dc115-0279-4477-a84b-d0609ca43e7b" (UID: "c66dc115-0279-4477-a84b-d0609ca43e7b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.317298 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-config" (OuterVolumeSpecName: "config") pod "6ca0c71a-d56b-4d23-a063-a32dc21262fe" (UID: "6ca0c71a-d56b-4d23-a063-a32dc21262fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.317736 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-config" (OuterVolumeSpecName: "config") pod "c66dc115-0279-4477-a84b-d0609ca43e7b" (UID: "c66dc115-0279-4477-a84b-d0609ca43e7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.320719 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66dc115-0279-4477-a84b-d0609ca43e7b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c66dc115-0279-4477-a84b-d0609ca43e7b" (UID: "c66dc115-0279-4477-a84b-d0609ca43e7b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.320812 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca0c71a-d56b-4d23-a063-a32dc21262fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ca0c71a-d56b-4d23-a063-a32dc21262fe" (UID: "6ca0c71a-d56b-4d23-a063-a32dc21262fe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.320993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca0c71a-d56b-4d23-a063-a32dc21262fe-kube-api-access-88c2r" (OuterVolumeSpecName: "kube-api-access-88c2r") pod "6ca0c71a-d56b-4d23-a063-a32dc21262fe" (UID: "6ca0c71a-d56b-4d23-a063-a32dc21262fe"). InnerVolumeSpecName "kube-api-access-88c2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.321107 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66dc115-0279-4477-a84b-d0609ca43e7b-kube-api-access-qjz6r" (OuterVolumeSpecName: "kube-api-access-qjz6r") pod "c66dc115-0279-4477-a84b-d0609ca43e7b" (UID: "c66dc115-0279-4477-a84b-d0609ca43e7b"). InnerVolumeSpecName "kube-api-access-qjz6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.416985 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417047 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417061 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjz6r\" (UniqueName: \"kubernetes.io/projected/c66dc115-0279-4477-a84b-d0609ca43e7b-kube-api-access-qjz6r\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417073 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88c2r\" (UniqueName: 
\"kubernetes.io/projected/6ca0c71a-d56b-4d23-a063-a32dc21262fe-kube-api-access-88c2r\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417082 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ca0c71a-d56b-4d23-a063-a32dc21262fe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417090 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c66dc115-0279-4477-a84b-d0609ca43e7b-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417098 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417106 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c66dc115-0279-4477-a84b-d0609ca43e7b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.417113 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ca0c71a-d56b-4d23-a063-a32dc21262fe-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.519985 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68b5756878-p2ltt"] Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.523612 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68b5756878-p2ltt"] Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.530087 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml"] Feb 26 
17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.533612 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56449c5486-ccnml"] Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.748810 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4"] Feb 26 17:21:53 crc kubenswrapper[4805]: E0226 17:21:53.749090 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca0c71a-d56b-4d23-a063-a32dc21262fe" containerName="route-controller-manager" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749108 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca0c71a-d56b-4d23-a063-a32dc21262fe" containerName="route-controller-manager" Feb 26 17:21:53 crc kubenswrapper[4805]: E0226 17:21:53.749130 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749137 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 17:21:53 crc kubenswrapper[4805]: E0226 17:21:53.749154 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66dc115-0279-4477-a84b-d0609ca43e7b" containerName="controller-manager" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749162 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66dc115-0279-4477-a84b-d0609ca43e7b" containerName="controller-manager" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749264 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749276 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66dc115-0279-4477-a84b-d0609ca43e7b" 
containerName="controller-manager" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749284 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca0c71a-d56b-4d23-a063-a32dc21262fe" containerName="route-controller-manager" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.749690 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.754426 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.754502 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.754630 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.754760 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.754894 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.755041 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.767755 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4"] Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.821705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-client-ca\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.821755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a319f7d4-9083-4014-b960-29199f6cdd7c-serving-cert\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.822044 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5c22\" (UniqueName: \"kubernetes.io/projected/a319f7d4-9083-4014-b960-29199f6cdd7c-kube-api-access-m5c22\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.822098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-config\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.923718 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-config\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.923764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5c22\" (UniqueName: \"kubernetes.io/projected/a319f7d4-9083-4014-b960-29199f6cdd7c-kube-api-access-m5c22\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.923799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-client-ca\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.923826 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a319f7d4-9083-4014-b960-29199f6cdd7c-serving-cert\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.924829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-client-ca\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.924997 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-config\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.927372 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a319f7d4-9083-4014-b960-29199f6cdd7c-serving-cert\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:53 crc kubenswrapper[4805]: I0226 17:21:53.937927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5c22\" (UniqueName: \"kubernetes.io/projected/a319f7d4-9083-4014-b960-29199f6cdd7c-kube-api-access-m5c22\") pod \"route-controller-manager-6857845b4c-7r6v4\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.065354 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.253038 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4"] Feb 26 17:21:54 crc kubenswrapper[4805]: W0226 17:21:54.258806 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda319f7d4_9083_4014_b960_29199f6cdd7c.slice/crio-12b606efffa228eaf503363e12f9fadbee80fe55130088daa4569d818d4c80aa WatchSource:0}: Error finding container 12b606efffa228eaf503363e12f9fadbee80fe55130088daa4569d818d4c80aa: Status 404 returned error can't find the container with id 12b606efffa228eaf503363e12f9fadbee80fe55130088daa4569d818d4c80aa Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.750212 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-s5xv2"] Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.750977 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.759200 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.759407 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.760085 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.761233 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.761392 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.761398 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.765644 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.770498 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-s5xv2"] Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.835672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-client-ca\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " 
pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.835980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fng7\" (UniqueName: \"kubernetes.io/projected/fc9969ef-e0cb-4558-992c-c988921d0082-kube-api-access-6fng7\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.836003 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-proxy-ca-bundles\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.836041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-config\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.836098 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9969ef-e0cb-4558-992c-c988921d0082-serving-cert\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.936785 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fc9969ef-e0cb-4558-992c-c988921d0082-serving-cert\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.936849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-client-ca\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.936887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fng7\" (UniqueName: \"kubernetes.io/projected/fc9969ef-e0cb-4558-992c-c988921d0082-kube-api-access-6fng7\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.936912 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-proxy-ca-bundles\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.936938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-config\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.937845 
4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-proxy-ca-bundles\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.937864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-client-ca\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.938255 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-config\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.943814 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9969ef-e0cb-4558-992c-c988921d0082-serving-cert\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.955666 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fng7\" (UniqueName: \"kubernetes.io/projected/fc9969ef-e0cb-4558-992c-c988921d0082-kube-api-access-6fng7\") pod \"controller-manager-7f86856858-s5xv2\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 
17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.960174 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca0c71a-d56b-4d23-a063-a32dc21262fe" path="/var/lib/kubelet/pods/6ca0c71a-d56b-4d23-a063-a32dc21262fe/volumes" Feb 26 17:21:54 crc kubenswrapper[4805]: I0226 17:21:54.961131 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66dc115-0279-4477-a84b-d0609ca43e7b" path="/var/lib/kubelet/pods/c66dc115-0279-4477-a84b-d0609ca43e7b/volumes" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.078487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.218750 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" event={"ID":"a319f7d4-9083-4014-b960-29199f6cdd7c","Type":"ContainerStarted","Data":"6a739dbac086cfae05bed3c483f39213a2bf2fac6e516dfbfa55efc738e32a61"} Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.219067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" event={"ID":"a319f7d4-9083-4014-b960-29199f6cdd7c","Type":"ContainerStarted","Data":"12b606efffa228eaf503363e12f9fadbee80fe55130088daa4569d818d4c80aa"} Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.219352 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.223707 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.225237 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.225657 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.226888 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.226922 4805 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1715e7dd93558f4dbe006f03b545a383b0491c34e670307f4e469b563e0d26a6" exitCode=137 Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.226948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1715e7dd93558f4dbe006f03b545a383b0491c34e670307f4e469b563e0d26a6"} Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.226971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9c733f4805a772a1a5cb06c310f41b123d22f814f7abc0390d26b590d0b428c1"} Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.226990 4805 scope.go:117] "RemoveContainer" containerID="8661d5f04ab119c311367b9e13c65671370a488a8d37c63aa6e37f2e1556688b" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.238279 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" podStartSLOduration=3.238262922 
podStartE2EDuration="3.238262922s" podCreationTimestamp="2026-02-26 17:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:21:55.235820312 +0000 UTC m=+429.797574651" watchObservedRunningTime="2026-02-26 17:21:55.238262922 +0000 UTC m=+429.800017261" Feb 26 17:21:55 crc kubenswrapper[4805]: I0226 17:21:55.284073 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-s5xv2"] Feb 26 17:21:55 crc kubenswrapper[4805]: W0226 17:21:55.288554 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc9969ef_e0cb_4558_992c_c988921d0082.slice/crio-6498a7506e7fb344db5da0930a2f5ce525d31fd2910b07f809c78622508174ab WatchSource:0}: Error finding container 6498a7506e7fb344db5da0930a2f5ce525d31fd2910b07f809c78622508174ab: Status 404 returned error can't find the container with id 6498a7506e7fb344db5da0930a2f5ce525d31fd2910b07f809c78622508174ab Feb 26 17:21:56 crc kubenswrapper[4805]: I0226 17:21:56.235398 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 26 17:21:56 crc kubenswrapper[4805]: I0226 17:21:56.236567 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 26 17:21:56 crc kubenswrapper[4805]: I0226 17:21:56.239179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" event={"ID":"fc9969ef-e0cb-4558-992c-c988921d0082","Type":"ContainerStarted","Data":"b7c21052d1fd081157d72620d443a4edcec95406c315c950a001f28ce8ae6b40"} Feb 26 17:21:56 crc kubenswrapper[4805]: I0226 17:21:56.239253 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" event={"ID":"fc9969ef-e0cb-4558-992c-c988921d0082","Type":"ContainerStarted","Data":"6498a7506e7fb344db5da0930a2f5ce525d31fd2910b07f809c78622508174ab"} Feb 26 17:21:56 crc kubenswrapper[4805]: I0226 17:21:56.257421 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" podStartSLOduration=4.257406865 podStartE2EDuration="4.257406865s" podCreationTimestamp="2026-02-26 17:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:21:56.256564465 +0000 UTC m=+430.818318814" watchObservedRunningTime="2026-02-26 17:21:56.257406865 +0000 UTC m=+430.819161204" Feb 26 17:21:57 crc kubenswrapper[4805]: I0226 17:21:57.244209 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:57 crc kubenswrapper[4805]: I0226 17:21:57.250404 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:21:58 crc kubenswrapper[4805]: I0226 17:21:58.019817 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:22:04 crc kubenswrapper[4805]: I0226 17:22:04.165799 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:22:04 crc kubenswrapper[4805]: I0226 17:22:04.170497 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:22:04 crc kubenswrapper[4805]: I0226 17:22:04.294885 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.106683 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-s5xv2"] Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.108467 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" podUID="fc9969ef-e0cb-4558-992c-c988921d0082" containerName="controller-manager" containerID="cri-o://b7c21052d1fd081157d72620d443a4edcec95406c315c950a001f28ce8ae6b40" gracePeriod=30 Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.117880 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4"] Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.118125 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" podUID="a319f7d4-9083-4014-b960-29199f6cdd7c" containerName="route-controller-manager" containerID="cri-o://6a739dbac086cfae05bed3c483f39213a2bf2fac6e516dfbfa55efc738e32a61" gracePeriod=30 Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.151427 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535442-dbgws"] Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.152074 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.153532 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.153572 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.153859 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.158151 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535442-dbgws"] Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.285375 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4qb9\" (UniqueName: \"kubernetes.io/projected/781869ac-5375-418c-a9db-22ce36c326fe-kube-api-access-k4qb9\") pod \"auto-csr-approver-29535442-dbgws\" (UID: \"781869ac-5375-418c-a9db-22ce36c326fe\") " pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.352550 4805 generic.go:334] "Generic (PLEG): container finished" podID="a319f7d4-9083-4014-b960-29199f6cdd7c" containerID="6a739dbac086cfae05bed3c483f39213a2bf2fac6e516dfbfa55efc738e32a61" exitCode=0 Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.352623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" event={"ID":"a319f7d4-9083-4014-b960-29199f6cdd7c","Type":"ContainerDied","Data":"6a739dbac086cfae05bed3c483f39213a2bf2fac6e516dfbfa55efc738e32a61"} Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.354208 4805 generic.go:334] "Generic (PLEG): container finished" podID="fc9969ef-e0cb-4558-992c-c988921d0082" 
containerID="b7c21052d1fd081157d72620d443a4edcec95406c315c950a001f28ce8ae6b40" exitCode=0 Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.354239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" event={"ID":"fc9969ef-e0cb-4558-992c-c988921d0082","Type":"ContainerDied","Data":"b7c21052d1fd081157d72620d443a4edcec95406c315c950a001f28ce8ae6b40"} Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.386482 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4qb9\" (UniqueName: \"kubernetes.io/projected/781869ac-5375-418c-a9db-22ce36c326fe-kube-api-access-k4qb9\") pod \"auto-csr-approver-29535442-dbgws\" (UID: \"781869ac-5375-418c-a9db-22ce36c326fe\") " pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.423519 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4qb9\" (UniqueName: \"kubernetes.io/projected/781869ac-5375-418c-a9db-22ce36c326fe-kube-api-access-k4qb9\") pod \"auto-csr-approver-29535442-dbgws\" (UID: \"781869ac-5375-418c-a9db-22ce36c326fe\") " pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.589863 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.630425 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.687761 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.691191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a319f7d4-9083-4014-b960-29199f6cdd7c-serving-cert\") pod \"a319f7d4-9083-4014-b960-29199f6cdd7c\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.691239 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-client-ca\") pod \"a319f7d4-9083-4014-b960-29199f6cdd7c\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.691313 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-config\") pod \"a319f7d4-9083-4014-b960-29199f6cdd7c\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.691352 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5c22\" (UniqueName: \"kubernetes.io/projected/a319f7d4-9083-4014-b960-29199f6cdd7c-kube-api-access-m5c22\") pod \"a319f7d4-9083-4014-b960-29199f6cdd7c\" (UID: \"a319f7d4-9083-4014-b960-29199f6cdd7c\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.692301 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "a319f7d4-9083-4014-b960-29199f6cdd7c" (UID: "a319f7d4-9083-4014-b960-29199f6cdd7c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.692458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-config" (OuterVolumeSpecName: "config") pod "a319f7d4-9083-4014-b960-29199f6cdd7c" (UID: "a319f7d4-9083-4014-b960-29199f6cdd7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.695278 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a319f7d4-9083-4014-b960-29199f6cdd7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a319f7d4-9083-4014-b960-29199f6cdd7c" (UID: "a319f7d4-9083-4014-b960-29199f6cdd7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.695632 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a319f7d4-9083-4014-b960-29199f6cdd7c-kube-api-access-m5c22" (OuterVolumeSpecName: "kube-api-access-m5c22") pod "a319f7d4-9083-4014-b960-29199f6cdd7c" (UID: "a319f7d4-9083-4014-b960-29199f6cdd7c"). InnerVolumeSpecName "kube-api-access-m5c22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792170 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fng7\" (UniqueName: \"kubernetes.io/projected/fc9969ef-e0cb-4558-992c-c988921d0082-kube-api-access-6fng7\") pod \"fc9969ef-e0cb-4558-992c-c988921d0082\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792257 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-proxy-ca-bundles\") pod \"fc9969ef-e0cb-4558-992c-c988921d0082\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792291 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-config\") pod \"fc9969ef-e0cb-4558-992c-c988921d0082\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792329 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9969ef-e0cb-4558-992c-c988921d0082-serving-cert\") pod \"fc9969ef-e0cb-4558-992c-c988921d0082\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792365 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-client-ca\") pod \"fc9969ef-e0cb-4558-992c-c988921d0082\" (UID: \"fc9969ef-e0cb-4558-992c-c988921d0082\") " Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792650 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a319f7d4-9083-4014-b960-29199f6cdd7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792667 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792682 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a319f7d4-9083-4014-b960-29199f6cdd7c-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.792695 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5c22\" (UniqueName: \"kubernetes.io/projected/a319f7d4-9083-4014-b960-29199f6cdd7c-kube-api-access-m5c22\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.793410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-config" (OuterVolumeSpecName: "config") pod "fc9969ef-e0cb-4558-992c-c988921d0082" (UID: "fc9969ef-e0cb-4558-992c-c988921d0082"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.793700 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fc9969ef-e0cb-4558-992c-c988921d0082" (UID: "fc9969ef-e0cb-4558-992c-c988921d0082"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.793917 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc9969ef-e0cb-4558-992c-c988921d0082" (UID: "fc9969ef-e0cb-4558-992c-c988921d0082"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.795447 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9969ef-e0cb-4558-992c-c988921d0082-kube-api-access-6fng7" (OuterVolumeSpecName: "kube-api-access-6fng7") pod "fc9969ef-e0cb-4558-992c-c988921d0082" (UID: "fc9969ef-e0cb-4558-992c-c988921d0082"). InnerVolumeSpecName "kube-api-access-6fng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.795543 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9969ef-e0cb-4558-992c-c988921d0082-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc9969ef-e0cb-4558-992c-c988921d0082" (UID: "fc9969ef-e0cb-4558-992c-c988921d0082"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.894183 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc9969ef-e0cb-4558-992c-c988921d0082-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.894218 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.894233 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fng7\" (UniqueName: \"kubernetes.io/projected/fc9969ef-e0cb-4558-992c-c988921d0082-kube-api-access-6fng7\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.894246 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.894259 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc9969ef-e0cb-4558-992c-c988921d0082-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:14 crc kubenswrapper[4805]: I0226 17:22:14.990948 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535442-dbgws"] Feb 26 17:22:14 crc kubenswrapper[4805]: W0226 17:22:14.995222 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod781869ac_5375_418c_a9db_22ce36c326fe.slice/crio-bf8d3b3743e612b1c45a9204a6cb06d46c162950218ff659371b1b78e841b92f WatchSource:0}: Error finding container bf8d3b3743e612b1c45a9204a6cb06d46c162950218ff659371b1b78e841b92f: Status 404 returned error 
can't find the container with id bf8d3b3743e612b1c45a9204a6cb06d46c162950218ff659371b1b78e841b92f Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.361904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" event={"ID":"fc9969ef-e0cb-4558-992c-c988921d0082","Type":"ContainerDied","Data":"6498a7506e7fb344db5da0930a2f5ce525d31fd2910b07f809c78622508174ab"} Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.361953 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f86856858-s5xv2" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.361963 4805 scope.go:117] "RemoveContainer" containerID="b7c21052d1fd081157d72620d443a4edcec95406c315c950a001f28ce8ae6b40" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.363731 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.363962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4" event={"ID":"a319f7d4-9083-4014-b960-29199f6cdd7c","Type":"ContainerDied","Data":"12b606efffa228eaf503363e12f9fadbee80fe55130088daa4569d818d4c80aa"} Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.365618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535442-dbgws" event={"ID":"781869ac-5375-418c-a9db-22ce36c326fe","Type":"ContainerStarted","Data":"bf8d3b3743e612b1c45a9204a6cb06d46c162950218ff659371b1b78e841b92f"} Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.379925 4805 scope.go:117] "RemoveContainer" containerID="6a739dbac086cfae05bed3c483f39213a2bf2fac6e516dfbfa55efc738e32a61" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.391162 4805 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4"] Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.398108 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-7r6v4"] Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.403373 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-s5xv2"] Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.417227 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-s5xv2"] Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.761586 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld"] Feb 26 17:22:15 crc kubenswrapper[4805]: E0226 17:22:15.762055 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a319f7d4-9083-4014-b960-29199f6cdd7c" containerName="route-controller-manager" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.762148 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a319f7d4-9083-4014-b960-29199f6cdd7c" containerName="route-controller-manager" Feb 26 17:22:15 crc kubenswrapper[4805]: E0226 17:22:15.762223 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9969ef-e0cb-4558-992c-c988921d0082" containerName="controller-manager" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.762280 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9969ef-e0cb-4558-992c-c988921d0082" containerName="controller-manager" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.762419 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9969ef-e0cb-4558-992c-c988921d0082" containerName="controller-manager" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.762490 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a319f7d4-9083-4014-b960-29199f6cdd7c" containerName="route-controller-manager" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.762916 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.764943 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.765177 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.765335 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.765360 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cd9f4966d-6glgz"] Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.765455 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.765552 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.765936 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.766332 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.768932 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.768971 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.769155 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.769280 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.770513 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.771524 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.778763 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.783371 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld"] Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.788202 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd9f4966d-6glgz"] Feb 26 17:22:15 crc 
kubenswrapper[4805]: I0226 17:22:15.906310 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4fcv\" (UniqueName: \"kubernetes.io/projected/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-kube-api-access-h4fcv\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.906594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-serving-cert\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.906690 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-serving-cert\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.906789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4ql2\" (UniqueName: \"kubernetes.io/projected/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-kube-api-access-s4ql2\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.906886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-config\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.906985 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-client-ca\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.907086 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-config\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.907225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-proxy-ca-bundles\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:15 crc kubenswrapper[4805]: I0226 17:22:15.907310 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-client-ca\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" 
Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-client-ca\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4fcv\" (UniqueName: \"kubernetes.io/projected/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-kube-api-access-h4fcv\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-serving-cert\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-serving-cert\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4ql2\" (UniqueName: 
\"kubernetes.io/projected/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-kube-api-access-s4ql2\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-config\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-client-ca\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008624 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-config\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.008653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-proxy-ca-bundles\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.010247 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-client-ca\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.010629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-proxy-ca-bundles\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.011566 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-config\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.011916 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-client-ca\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.011863 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-config\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 
crc kubenswrapper[4805]: I0226 17:22:16.017746 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-serving-cert\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.017813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-serving-cert\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.033353 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4ql2\" (UniqueName: \"kubernetes.io/projected/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-kube-api-access-s4ql2\") pod \"controller-manager-cd9f4966d-6glgz\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.035045 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4fcv\" (UniqueName: \"kubernetes.io/projected/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-kube-api-access-h4fcv\") pod \"route-controller-manager-59656c64c4-mqqld\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.086231 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.102559 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.307215 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld"] Feb 26 17:22:16 crc kubenswrapper[4805]: W0226 17:22:16.316534 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06933984_edd2_4f3f_9bd8_6b2c8eb2bfc8.slice/crio-151b3fab5003e2fe7e110ce882f8d8d879fe2b0ba9ef12909451e612664dfcbc WatchSource:0}: Error finding container 151b3fab5003e2fe7e110ce882f8d8d879fe2b0ba9ef12909451e612664dfcbc: Status 404 returned error can't find the container with id 151b3fab5003e2fe7e110ce882f8d8d879fe2b0ba9ef12909451e612664dfcbc Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.340092 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cd9f4966d-6glgz"] Feb 26 17:22:16 crc kubenswrapper[4805]: W0226 17:22:16.350250 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93b3feae_d3a5_4818_94c6_ac59f5ec20e9.slice/crio-89c0ee7fe47543ee3856b89e095e66c170d56de3af3483c7094fa01a966e5eb1 WatchSource:0}: Error finding container 89c0ee7fe47543ee3856b89e095e66c170d56de3af3483c7094fa01a966e5eb1: Status 404 returned error can't find the container with id 89c0ee7fe47543ee3856b89e095e66c170d56de3af3483c7094fa01a966e5eb1 Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.373375 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" 
event={"ID":"93b3feae-d3a5-4818-94c6-ac59f5ec20e9","Type":"ContainerStarted","Data":"89c0ee7fe47543ee3856b89e095e66c170d56de3af3483c7094fa01a966e5eb1"} Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.376166 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" event={"ID":"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8","Type":"ContainerStarted","Data":"151b3fab5003e2fe7e110ce882f8d8d879fe2b0ba9ef12909451e612664dfcbc"} Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.966708 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a319f7d4-9083-4014-b960-29199f6cdd7c" path="/var/lib/kubelet/pods/a319f7d4-9083-4014-b960-29199f6cdd7c/volumes" Feb 26 17:22:16 crc kubenswrapper[4805]: I0226 17:22:16.968031 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9969ef-e0cb-4558-992c-c988921d0082" path="/var/lib/kubelet/pods/fc9969ef-e0cb-4558-992c-c988921d0082/volumes" Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.383738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" event={"ID":"93b3feae-d3a5-4818-94c6-ac59f5ec20e9","Type":"ContainerStarted","Data":"9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767"} Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.384880 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.386484 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" event={"ID":"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8","Type":"ContainerStarted","Data":"59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62"} Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.387094 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.388557 4805 generic.go:334] "Generic (PLEG): container finished" podID="781869ac-5375-418c-a9db-22ce36c326fe" containerID="edddc47e4bc778bc90cada61a753d495ef64356fbc6a59d94d95221441be045f" exitCode=0 Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.388609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535442-dbgws" event={"ID":"781869ac-5375-418c-a9db-22ce36c326fe","Type":"ContainerDied","Data":"edddc47e4bc778bc90cada61a753d495ef64356fbc6a59d94d95221441be045f"} Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.389336 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.392564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.399602 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" podStartSLOduration=3.399582027 podStartE2EDuration="3.399582027s" podCreationTimestamp="2026-02-26 17:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:22:17.398936771 +0000 UTC m=+451.960691110" watchObservedRunningTime="2026-02-26 17:22:17.399582027 +0000 UTC m=+451.961336366" Feb 26 17:22:17 crc kubenswrapper[4805]: I0226 17:22:17.431438 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" podStartSLOduration=3.431414063 
podStartE2EDuration="3.431414063s" podCreationTimestamp="2026-02-26 17:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:22:17.427244839 +0000 UTC m=+451.988999198" watchObservedRunningTime="2026-02-26 17:22:17.431414063 +0000 UTC m=+451.993168412" Feb 26 17:22:18 crc kubenswrapper[4805]: I0226 17:22:18.644949 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:18 crc kubenswrapper[4805]: I0226 17:22:18.741783 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4qb9\" (UniqueName: \"kubernetes.io/projected/781869ac-5375-418c-a9db-22ce36c326fe-kube-api-access-k4qb9\") pod \"781869ac-5375-418c-a9db-22ce36c326fe\" (UID: \"781869ac-5375-418c-a9db-22ce36c326fe\") " Feb 26 17:22:18 crc kubenswrapper[4805]: I0226 17:22:18.747687 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781869ac-5375-418c-a9db-22ce36c326fe-kube-api-access-k4qb9" (OuterVolumeSpecName: "kube-api-access-k4qb9") pod "781869ac-5375-418c-a9db-22ce36c326fe" (UID: "781869ac-5375-418c-a9db-22ce36c326fe"). InnerVolumeSpecName "kube-api-access-k4qb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:18 crc kubenswrapper[4805]: I0226 17:22:18.844811 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4qb9\" (UniqueName: \"kubernetes.io/projected/781869ac-5375-418c-a9db-22ce36c326fe-kube-api-access-k4qb9\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:19 crc kubenswrapper[4805]: I0226 17:22:19.401265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535442-dbgws" event={"ID":"781869ac-5375-418c-a9db-22ce36c326fe","Type":"ContainerDied","Data":"bf8d3b3743e612b1c45a9204a6cb06d46c162950218ff659371b1b78e841b92f"} Feb 26 17:22:19 crc kubenswrapper[4805]: I0226 17:22:19.401633 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf8d3b3743e612b1c45a9204a6cb06d46c162950218ff659371b1b78e841b92f" Feb 26 17:22:19 crc kubenswrapper[4805]: I0226 17:22:19.401312 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535442-dbgws" Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.391722 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fvvs"] Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.392293 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7fvvs" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="registry-server" containerID="cri-o://7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47" gracePeriod=2 Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.915489 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.977900 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.977971 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.993338 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzkpq"] Feb 26 17:22:32 crc kubenswrapper[4805]: I0226 17:22:32.993563 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzkpq" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="registry-server" containerID="cri-o://691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1" gracePeriod=2 Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.026652 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-utilities\") pod \"83d8504f-33ad-4812-bc1e-11233c225974\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.026701 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnrtw\" (UniqueName: \"kubernetes.io/projected/83d8504f-33ad-4812-bc1e-11233c225974-kube-api-access-bnrtw\") pod 
\"83d8504f-33ad-4812-bc1e-11233c225974\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.026793 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-catalog-content\") pod \"83d8504f-33ad-4812-bc1e-11233c225974\" (UID: \"83d8504f-33ad-4812-bc1e-11233c225974\") " Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.027588 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-utilities" (OuterVolumeSpecName: "utilities") pod "83d8504f-33ad-4812-bc1e-11233c225974" (UID: "83d8504f-33ad-4812-bc1e-11233c225974"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.035766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d8504f-33ad-4812-bc1e-11233c225974-kube-api-access-bnrtw" (OuterVolumeSpecName: "kube-api-access-bnrtw") pod "83d8504f-33ad-4812-bc1e-11233c225974" (UID: "83d8504f-33ad-4812-bc1e-11233c225974"). InnerVolumeSpecName "kube-api-access-bnrtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.080669 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d8504f-33ad-4812-bc1e-11233c225974" (UID: "83d8504f-33ad-4812-bc1e-11233c225974"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.129195 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.129248 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnrtw\" (UniqueName: \"kubernetes.io/projected/83d8504f-33ad-4812-bc1e-11233c225974-kube-api-access-bnrtw\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.129290 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d8504f-33ad-4812-bc1e-11233c225974-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.424176 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.487680 4805 generic.go:334] "Generic (PLEG): container finished" podID="87196950-f6be-442b-a725-3cdee5962f55" containerID="691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1" exitCode=0 Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.487997 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzkpq" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.487911 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerDied","Data":"691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1"} Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.488184 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzkpq" event={"ID":"87196950-f6be-442b-a725-3cdee5962f55","Type":"ContainerDied","Data":"34cbff5bc8cf6298bff43d3ca62fde5fab21ed596e5c62a72589a70dc9e3c774"} Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.488225 4805 scope.go:117] "RemoveContainer" containerID="691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.490924 4805 generic.go:334] "Generic (PLEG): container finished" podID="83d8504f-33ad-4812-bc1e-11233c225974" containerID="7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47" exitCode=0 Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.491060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fvvs" event={"ID":"83d8504f-33ad-4812-bc1e-11233c225974","Type":"ContainerDied","Data":"7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47"} Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.491193 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fvvs" event={"ID":"83d8504f-33ad-4812-bc1e-11233c225974","Type":"ContainerDied","Data":"6072246c8d3ab45eac6d8b3267a2965acb1a1ee9ade6e33d188acd735bf12d0e"} Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.491074 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fvvs" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.504746 4805 scope.go:117] "RemoveContainer" containerID="403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.526589 4805 scope.go:117] "RemoveContainer" containerID="aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.533586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-utilities\") pod \"87196950-f6be-442b-a725-3cdee5962f55\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.533639 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6lzz\" (UniqueName: \"kubernetes.io/projected/87196950-f6be-442b-a725-3cdee5962f55-kube-api-access-p6lzz\") pod \"87196950-f6be-442b-a725-3cdee5962f55\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.533781 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-catalog-content\") pod \"87196950-f6be-442b-a725-3cdee5962f55\" (UID: \"87196950-f6be-442b-a725-3cdee5962f55\") " Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.536193 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-utilities" (OuterVolumeSpecName: "utilities") pod "87196950-f6be-442b-a725-3cdee5962f55" (UID: "87196950-f6be-442b-a725-3cdee5962f55"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.552533 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7fvvs"] Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.555784 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7fvvs"] Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.569388 4805 scope.go:117] "RemoveContainer" containerID="691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1" Feb 26 17:22:33 crc kubenswrapper[4805]: E0226 17:22:33.570056 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1\": container with ID starting with 691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1 not found: ID does not exist" containerID="691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.570120 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1"} err="failed to get container status \"691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1\": rpc error: code = NotFound desc = could not find container \"691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1\": container with ID starting with 691a564fe17e21e2ba9b79b502179bb34e0d35e8be865c3888a5b0b1536787c1 not found: ID does not exist" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.570164 4805 scope.go:117] "RemoveContainer" containerID="403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad" Feb 26 17:22:33 crc kubenswrapper[4805]: E0226 17:22:33.570641 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad\": container with ID starting with 403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad not found: ID does not exist" containerID="403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.570672 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad"} err="failed to get container status \"403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad\": rpc error: code = NotFound desc = could not find container \"403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad\": container with ID starting with 403d8a011c5965e3e1b078336abff0f9cc7b71924078653cfc32bd1dbb6ba5ad not found: ID does not exist" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.570687 4805 scope.go:117] "RemoveContainer" containerID="aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66" Feb 26 17:22:33 crc kubenswrapper[4805]: E0226 17:22:33.571350 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66\": container with ID starting with aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66 not found: ID does not exist" containerID="aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.571375 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66"} err="failed to get container status \"aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66\": rpc error: code = NotFound desc = could not find container \"aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66\": container 
with ID starting with aedd15a02a145d8ac533dabf5e478a79787dda12378181d8a5de7742c0debb66 not found: ID does not exist" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.571391 4805 scope.go:117] "RemoveContainer" containerID="7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.571848 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87196950-f6be-442b-a725-3cdee5962f55-kube-api-access-p6lzz" (OuterVolumeSpecName: "kube-api-access-p6lzz") pod "87196950-f6be-442b-a725-3cdee5962f55" (UID: "87196950-f6be-442b-a725-3cdee5962f55"). InnerVolumeSpecName "kube-api-access-p6lzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.586160 4805 scope.go:117] "RemoveContainer" containerID="a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.591188 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87196950-f6be-442b-a725-3cdee5962f55" (UID: "87196950-f6be-442b-a725-3cdee5962f55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.601663 4805 scope.go:117] "RemoveContainer" containerID="74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.624245 4805 scope.go:117] "RemoveContainer" containerID="7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47" Feb 26 17:22:33 crc kubenswrapper[4805]: E0226 17:22:33.625123 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47\": container with ID starting with 7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47 not found: ID does not exist" containerID="7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.625160 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47"} err="failed to get container status \"7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47\": rpc error: code = NotFound desc = could not find container \"7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47\": container with ID starting with 7906c38ad6133b6a391f43c2087dec2d40532a9a7374b6a8d6ed2437a083ec47 not found: ID does not exist" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.625191 4805 scope.go:117] "RemoveContainer" containerID="a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c" Feb 26 17:22:33 crc kubenswrapper[4805]: E0226 17:22:33.625592 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c\": container with ID starting with 
a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c not found: ID does not exist" containerID="a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.625665 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c"} err="failed to get container status \"a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c\": rpc error: code = NotFound desc = could not find container \"a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c\": container with ID starting with a2c139a491e284663312701882ad39d995220f59d3c83bfa33aca8d267ceec9c not found: ID does not exist" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.625709 4805 scope.go:117] "RemoveContainer" containerID="74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893" Feb 26 17:22:33 crc kubenswrapper[4805]: E0226 17:22:33.626057 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893\": container with ID starting with 74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893 not found: ID does not exist" containerID="74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.626097 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893"} err="failed to get container status \"74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893\": rpc error: code = NotFound desc = could not find container \"74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893\": container with ID starting with 74e7423b37658a4afd05201fd49048f980d08f7797dafc6b276de33565e71893 not found: ID does not 
exist" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.635790 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.635841 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87196950-f6be-442b-a725-3cdee5962f55-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.635856 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6lzz\" (UniqueName: \"kubernetes.io/projected/87196950-f6be-442b-a725-3cdee5962f55-kube-api-access-p6lzz\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.816282 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzkpq"] Feb 26 17:22:33 crc kubenswrapper[4805]: I0226 17:22:33.820151 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzkpq"] Feb 26 17:22:34 crc kubenswrapper[4805]: I0226 17:22:34.792963 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xsss"] Feb 26 17:22:34 crc kubenswrapper[4805]: I0226 17:22:34.793212 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2xsss" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="registry-server" containerID="cri-o://34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4" gracePeriod=2 Feb 26 17:22:34 crc kubenswrapper[4805]: I0226 17:22:34.961255 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d8504f-33ad-4812-bc1e-11233c225974" path="/var/lib/kubelet/pods/83d8504f-33ad-4812-bc1e-11233c225974/volumes" Feb 26 17:22:34 crc kubenswrapper[4805]: 
I0226 17:22:34.962152 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87196950-f6be-442b-a725-3cdee5962f55" path="/var/lib/kubelet/pods/87196950-f6be-442b-a725-3cdee5962f55/volumes" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.322447 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.358895 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2qjc\" (UniqueName: \"kubernetes.io/projected/7a876494-42c5-4be9-aad1-46d8ce3c68bb-kube-api-access-n2qjc\") pod \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.359012 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-utilities\") pod \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.359098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-catalog-content\") pod \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\" (UID: \"7a876494-42c5-4be9-aad1-46d8ce3c68bb\") " Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.360269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-utilities" (OuterVolumeSpecName: "utilities") pod "7a876494-42c5-4be9-aad1-46d8ce3c68bb" (UID: "7a876494-42c5-4be9-aad1-46d8ce3c68bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.368333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a876494-42c5-4be9-aad1-46d8ce3c68bb-kube-api-access-n2qjc" (OuterVolumeSpecName: "kube-api-access-n2qjc") pod "7a876494-42c5-4be9-aad1-46d8ce3c68bb" (UID: "7a876494-42c5-4be9-aad1-46d8ce3c68bb"). InnerVolumeSpecName "kube-api-access-n2qjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.386008 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a876494-42c5-4be9-aad1-46d8ce3c68bb" (UID: "7a876494-42c5-4be9-aad1-46d8ce3c68bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.400928 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5l62"] Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.401387 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n5l62" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="registry-server" containerID="cri-o://49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873" gracePeriod=2 Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.460378 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.460414 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2qjc\" (UniqueName: 
\"kubernetes.io/projected/7a876494-42c5-4be9-aad1-46d8ce3c68bb-kube-api-access-n2qjc\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.460426 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a876494-42c5-4be9-aad1-46d8ce3c68bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.507515 4805 generic.go:334] "Generic (PLEG): container finished" podID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerID="34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4" exitCode=0 Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.507566 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xsss" event={"ID":"7a876494-42c5-4be9-aad1-46d8ce3c68bb","Type":"ContainerDied","Data":"34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4"} Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.507599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xsss" event={"ID":"7a876494-42c5-4be9-aad1-46d8ce3c68bb","Type":"ContainerDied","Data":"2cf95c71cada185b9870da61b2081b121a02856e36fd269527db7a0f8a0fe6de"} Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.507598 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xsss" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.507623 4805 scope.go:117] "RemoveContainer" containerID="34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.545440 4805 scope.go:117] "RemoveContainer" containerID="7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.561444 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xsss"] Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.567792 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xsss"] Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.586917 4805 scope.go:117] "RemoveContainer" containerID="8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.610714 4805 scope.go:117] "RemoveContainer" containerID="34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4" Feb 26 17:22:35 crc kubenswrapper[4805]: E0226 17:22:35.616096 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4\": container with ID starting with 34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4 not found: ID does not exist" containerID="34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.616143 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4"} err="failed to get container status \"34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4\": rpc error: code = NotFound desc = could not find container 
\"34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4\": container with ID starting with 34529b69203a5f40a94eb522b498519d9054d9622241ed114174b1dda1953af4 not found: ID does not exist" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.616204 4805 scope.go:117] "RemoveContainer" containerID="7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b" Feb 26 17:22:35 crc kubenswrapper[4805]: E0226 17:22:35.617518 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b\": container with ID starting with 7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b not found: ID does not exist" containerID="7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.617550 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b"} err="failed to get container status \"7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b\": rpc error: code = NotFound desc = could not find container \"7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b\": container with ID starting with 7e61f7441131b794c97fef61acf5d277700c6085d200d0f3b5cdcc7e7a20395b not found: ID does not exist" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.617575 4805 scope.go:117] "RemoveContainer" containerID="8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341" Feb 26 17:22:35 crc kubenswrapper[4805]: E0226 17:22:35.617945 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341\": container with ID starting with 8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341 not found: ID does not exist" 
containerID="8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.617963 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341"} err="failed to get container status \"8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341\": rpc error: code = NotFound desc = could not find container \"8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341\": container with ID starting with 8e17cb90e7bcf797777439980b61bac20684c9283052ccf63e9aa8b0a6e38341 not found: ID does not exist" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.872038 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.970122 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-catalog-content\") pod \"884caea0-c055-415b-92c3-fae420465726\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.970241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/884caea0-c055-415b-92c3-fae420465726-kube-api-access-rjwsj\") pod \"884caea0-c055-415b-92c3-fae420465726\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.970273 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-utilities\") pod \"884caea0-c055-415b-92c3-fae420465726\" (UID: \"884caea0-c055-415b-92c3-fae420465726\") " Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.971416 
4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-utilities" (OuterVolumeSpecName: "utilities") pod "884caea0-c055-415b-92c3-fae420465726" (UID: "884caea0-c055-415b-92c3-fae420465726"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:35 crc kubenswrapper[4805]: I0226 17:22:35.977409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884caea0-c055-415b-92c3-fae420465726-kube-api-access-rjwsj" (OuterVolumeSpecName: "kube-api-access-rjwsj") pod "884caea0-c055-415b-92c3-fae420465726" (UID: "884caea0-c055-415b-92c3-fae420465726"). InnerVolumeSpecName "kube-api-access-rjwsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.077402 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjwsj\" (UniqueName: \"kubernetes.io/projected/884caea0-c055-415b-92c3-fae420465726-kube-api-access-rjwsj\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.077462 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.100285 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "884caea0-c055-415b-92c3-fae420465726" (UID: "884caea0-c055-415b-92c3-fae420465726"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.178952 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/884caea0-c055-415b-92c3-fae420465726-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.516756 4805 generic.go:334] "Generic (PLEG): container finished" podID="884caea0-c055-415b-92c3-fae420465726" containerID="49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873" exitCode=0 Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.516801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerDied","Data":"49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873"} Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.516841 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n5l62" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.516867 4805 scope.go:117] "RemoveContainer" containerID="49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.516852 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n5l62" event={"ID":"884caea0-c055-415b-92c3-fae420465726","Type":"ContainerDied","Data":"8ae67eb4b56d2107cbec72987ceb1b6325462d73fa9a5b6c0d81be810c77df81"} Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.539518 4805 scope.go:117] "RemoveContainer" containerID="0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.548926 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n5l62"] Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.553268 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n5l62"] Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.570358 4805 scope.go:117] "RemoveContainer" containerID="4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.586200 4805 scope.go:117] "RemoveContainer" containerID="49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873" Feb 26 17:22:36 crc kubenswrapper[4805]: E0226 17:22:36.586700 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873\": container with ID starting with 49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873 not found: ID does not exist" containerID="49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.586751 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873"} err="failed to get container status \"49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873\": rpc error: code = NotFound desc = could not find container \"49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873\": container with ID starting with 49f93c1090503b3c506b09fd0a040fe039e461f141dff8d14209ba3ed77fb873 not found: ID does not exist" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.586783 4805 scope.go:117] "RemoveContainer" containerID="0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb" Feb 26 17:22:36 crc kubenswrapper[4805]: E0226 17:22:36.587236 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb\": container with ID starting with 0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb not found: ID does not exist" containerID="0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.587279 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb"} err="failed to get container status \"0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb\": rpc error: code = NotFound desc = could not find container \"0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb\": container with ID starting with 0d48a1f95e5e3c8aa906c61ed59ea8a92bb7baf529f7b10c77c7ab8958f736eb not found: ID does not exist" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.587305 4805 scope.go:117] "RemoveContainer" containerID="4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9" Feb 26 17:22:36 crc kubenswrapper[4805]: E0226 
17:22:36.587786 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9\": container with ID starting with 4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9 not found: ID does not exist" containerID="4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.587817 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9"} err="failed to get container status \"4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9\": rpc error: code = NotFound desc = could not find container \"4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9\": container with ID starting with 4ababcfbc06775918fd1e3e6d68709088132077d63810c00be2d25568a725cc9 not found: ID does not exist" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.961686 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" path="/var/lib/kubelet/pods/7a876494-42c5-4be9-aad1-46d8ce3c68bb/volumes" Feb 26 17:22:36 crc kubenswrapper[4805]: I0226 17:22:36.962677 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="884caea0-c055-415b-92c3-fae420465726" path="/var/lib/kubelet/pods/884caea0-c055-415b-92c3-fae420465726/volumes" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.012723 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld"] Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.013551 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" podUID="06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" 
containerName="route-controller-manager" containerID="cri-o://59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62" gracePeriod=30 Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.512397 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.554192 4805 generic.go:334] "Generic (PLEG): container finished" podID="06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" containerID="59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62" exitCode=0 Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.554235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" event={"ID":"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8","Type":"ContainerDied","Data":"59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62"} Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.554260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" event={"ID":"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8","Type":"ContainerDied","Data":"151b3fab5003e2fe7e110ce882f8d8d879fe2b0ba9ef12909451e612664dfcbc"} Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.554276 4805 scope.go:117] "RemoveContainer" containerID="59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.554367 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.568027 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-serving-cert\") pod \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.568085 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-client-ca\") pod \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.568147 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-config\") pod \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.568179 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4fcv\" (UniqueName: \"kubernetes.io/projected/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-kube-api-access-h4fcv\") pod \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\" (UID: \"06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8\") " Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.569682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" (UID: "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.569831 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-config" (OuterVolumeSpecName: "config") pod "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" (UID: "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.572998 4805 scope.go:117] "RemoveContainer" containerID="59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.573570 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" (UID: "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:22:42 crc kubenswrapper[4805]: E0226 17:22:42.574088 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62\": container with ID starting with 59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62 not found: ID does not exist" containerID="59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.574120 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62"} err="failed to get container status \"59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62\": rpc error: code = NotFound desc = could not find container \"59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62\": container with ID starting with 59283a8baaad43155a9a2f942cf050308d925408093f06be9781d2e0c6ea5d62 not found: ID does not exist" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.574666 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-kube-api-access-h4fcv" (OuterVolumeSpecName: "kube-api-access-h4fcv") pod "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" (UID: "06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8"). InnerVolumeSpecName "kube-api-access-h4fcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.669991 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.670060 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4fcv\" (UniqueName: \"kubernetes.io/projected/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-kube-api-access-h4fcv\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.670073 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.670083 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.886189 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld"] Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.888967 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59656c64c4-mqqld"] Feb 26 17:22:42 crc kubenswrapper[4805]: I0226 17:22:42.963223 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" path="/var/lib/kubelet/pods/06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8/volumes" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.783873 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2"] Feb 26 17:22:43 crc 
kubenswrapper[4805]: E0226 17:22:43.784405 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784421 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784433 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784441 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784456 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784464 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784478 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784486 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784493 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="781869ac-5375-418c-a9db-22ce36c326fe" containerName="oc" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784499 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="781869ac-5375-418c-a9db-22ce36c326fe" containerName="oc" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 
17:22:43.784511 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784518 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784528 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784534 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784544 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784551 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784558 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784565 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784575 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784582 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="extract-content" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 
17:22:43.784592 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784600 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784607 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784616 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="extract-utilities" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784626 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" containerName="route-controller-manager" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784634 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" containerName="route-controller-manager" Feb 26 17:22:43 crc kubenswrapper[4805]: E0226 17:22:43.784642 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784649 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784760 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d8504f-33ad-4812-bc1e-11233c225974" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784773 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a876494-42c5-4be9-aad1-46d8ce3c68bb" containerName="registry-server" Feb 26 17:22:43 crc 
kubenswrapper[4805]: I0226 17:22:43.784785 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="06933984-edd2-4f3f-9bd8-6b2c8eb2bfc8" containerName="route-controller-manager" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784795 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="87196950-f6be-442b-a725-3cdee5962f55" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784806 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="781869ac-5375-418c-a9db-22ce36c326fe" containerName="oc" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.784820 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="884caea0-c055-415b-92c3-fae420465726" containerName="registry-server" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.785245 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.790575 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.791281 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.791305 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.791771 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.791776 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 
17:22:43.791822 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.804206 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2"] Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.883566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3629b83-8df6-4de4-8be7-30a0e87ca459-client-ca\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.883618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3629b83-8df6-4de4-8be7-30a0e87ca459-config\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.883714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3629b83-8df6-4de4-8be7-30a0e87ca459-serving-cert\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.883754 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhtcx\" (UniqueName: 
\"kubernetes.io/projected/f3629b83-8df6-4de4-8be7-30a0e87ca459-kube-api-access-vhtcx\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.985373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3629b83-8df6-4de4-8be7-30a0e87ca459-serving-cert\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.985422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhtcx\" (UniqueName: \"kubernetes.io/projected/f3629b83-8df6-4de4-8be7-30a0e87ca459-kube-api-access-vhtcx\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.985447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3629b83-8df6-4de4-8be7-30a0e87ca459-client-ca\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.985466 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3629b83-8df6-4de4-8be7-30a0e87ca459-config\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " 
pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.986642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f3629b83-8df6-4de4-8be7-30a0e87ca459-client-ca\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.986788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3629b83-8df6-4de4-8be7-30a0e87ca459-config\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:43 crc kubenswrapper[4805]: I0226 17:22:43.990563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f3629b83-8df6-4de4-8be7-30a0e87ca459-serving-cert\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:44 crc kubenswrapper[4805]: I0226 17:22:44.013697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhtcx\" (UniqueName: \"kubernetes.io/projected/f3629b83-8df6-4de4-8be7-30a0e87ca459-kube-api-access-vhtcx\") pod \"route-controller-manager-6857845b4c-hbfn2\" (UID: \"f3629b83-8df6-4de4-8be7-30a0e87ca459\") " pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:44 crc kubenswrapper[4805]: I0226 17:22:44.100145 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:44 crc kubenswrapper[4805]: I0226 17:22:44.513052 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2"] Feb 26 17:22:44 crc kubenswrapper[4805]: I0226 17:22:44.571740 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" event={"ID":"f3629b83-8df6-4de4-8be7-30a0e87ca459","Type":"ContainerStarted","Data":"179161618ca5fabe2e9afbfc6c03092d2fae06b91625c1ce4e99f4e4d77b9788"} Feb 26 17:22:45 crc kubenswrapper[4805]: I0226 17:22:45.578493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" event={"ID":"f3629b83-8df6-4de4-8be7-30a0e87ca459","Type":"ContainerStarted","Data":"2def4e889df233c8b069440964332aa9984aa988c158a2b6a2356e5c3eff5c8d"} Feb 26 17:22:45 crc kubenswrapper[4805]: I0226 17:22:45.578836 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:45 crc kubenswrapper[4805]: I0226 17:22:45.584491 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" Feb 26 17:22:45 crc kubenswrapper[4805]: I0226 17:22:45.599854 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6857845b4c-hbfn2" podStartSLOduration=3.599835331 podStartE2EDuration="3.599835331s" podCreationTimestamp="2026-02-26 17:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:22:45.598954529 +0000 UTC m=+480.160708888" 
watchObservedRunningTime="2026-02-26 17:22:45.599835331 +0000 UTC m=+480.161589670" Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.872183 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmjsl"] Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.872979 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rmjsl" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="registry-server" containerID="cri-o://b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b" gracePeriod=30 Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.884746 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfcdt"] Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.885047 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tfcdt" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="registry-server" containerID="cri-o://d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3" gracePeriod=30 Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.897685 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d7sjf"] Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.897930 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator" containerID="cri-o://d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481" gracePeriod=30 Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.903913 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jttjv"] Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.910870 4805 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jttjv" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="registry-server" containerID="cri-o://b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85" gracePeriod=30 Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.916119 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wnt7l"] Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.916379 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wnt7l" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="registry-server" containerID="cri-o://10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7" gracePeriod=30 Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.931483 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-shsd9"] Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.932315 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:53 crc kubenswrapper[4805]: I0226 17:22:53.940935 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-shsd9"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.012702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.012766 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9vnk\" (UniqueName: \"kubernetes.io/projected/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-kube-api-access-b9vnk\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.012834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.113738 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9vnk\" (UniqueName: \"kubernetes.io/projected/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-kube-api-access-b9vnk\") pod \"marketplace-operator-79b997595-shsd9\" (UID: 
\"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.113799 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.113884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.115102 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.121927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.132105 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b9vnk\" (UniqueName: \"kubernetes.io/projected/f4c1b8e1-fec8-422a-b155-b99fd4a121fc-kube-api-access-b9vnk\") pod \"marketplace-operator-79b997595-shsd9\" (UID: \"f4c1b8e1-fec8-422a-b155-b99fd4a121fc\") " pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.349611 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.358552 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.410434 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.420348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfrzh\" (UniqueName: \"kubernetes.io/projected/043bfd8c-1387-4b00-ad52-1e4efd43c942-kube-api-access-xfrzh\") pod \"043bfd8c-1387-4b00-ad52-1e4efd43c942\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.420393 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-utilities\") pod \"043bfd8c-1387-4b00-ad52-1e4efd43c942\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.420411 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-catalog-content\") pod \"043bfd8c-1387-4b00-ad52-1e4efd43c942\" (UID: \"043bfd8c-1387-4b00-ad52-1e4efd43c942\") " Feb 26 17:22:54 crc 
kubenswrapper[4805]: I0226 17:22:54.422877 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-utilities" (OuterVolumeSpecName: "utilities") pod "043bfd8c-1387-4b00-ad52-1e4efd43c942" (UID: "043bfd8c-1387-4b00-ad52-1e4efd43c942"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.428449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043bfd8c-1387-4b00-ad52-1e4efd43c942-kube-api-access-xfrzh" (OuterVolumeSpecName: "kube-api-access-xfrzh") pod "043bfd8c-1387-4b00-ad52-1e4efd43c942" (UID: "043bfd8c-1387-4b00-ad52-1e4efd43c942"). InnerVolumeSpecName "kube-api-access-xfrzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.439380 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.439824 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.467187 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.510708 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "043bfd8c-1387-4b00-ad52-1e4efd43c942" (UID: "043bfd8c-1387-4b00-ad52-1e4efd43c942"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522076 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics\") pod \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522119 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-catalog-content\") pod \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522155 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca\") pod \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnv8p\" (UniqueName: \"kubernetes.io/projected/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-kube-api-access-rnv8p\") pod \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522204 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58lf6\" (UniqueName: \"kubernetes.io/projected/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-kube-api-access-58lf6\") pod \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522249 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-utilities\") pod \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\" (UID: \"1f5ab03e-b223-4e5b-8c9f-3d350a66156e\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522270 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-utilities\") pod \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522290 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sckxv\" (UniqueName: \"kubernetes.io/projected/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-kube-api-access-sckxv\") pod \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\" (UID: \"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-catalog-content\") pod \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\" (UID: \"6239f68a-d80a-4fd4-9a6d-69bd48b1c015\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522354 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-catalog-content\") pod \"231a1216-2a55-4e7b-b026-104624c69857\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522382 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-utilities\") pod \"231a1216-2a55-4e7b-b026-104624c69857\" (UID: 
\"231a1216-2a55-4e7b-b026-104624c69857\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522428 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfr46\" (UniqueName: \"kubernetes.io/projected/231a1216-2a55-4e7b-b026-104624c69857-kube-api-access-dfr46\") pod \"231a1216-2a55-4e7b-b026-104624c69857\" (UID: \"231a1216-2a55-4e7b-b026-104624c69857\") " Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522623 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfrzh\" (UniqueName: \"kubernetes.io/projected/043bfd8c-1387-4b00-ad52-1e4efd43c942-kube-api-access-xfrzh\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522636 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.522648 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/043bfd8c-1387-4b00-ad52-1e4efd43c942-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.524883 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-utilities" (OuterVolumeSpecName: "utilities") pod "6239f68a-d80a-4fd4-9a6d-69bd48b1c015" (UID: "6239f68a-d80a-4fd4-9a6d-69bd48b1c015"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.525723 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" (UID: "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.526842 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-kube-api-access-sckxv" (OuterVolumeSpecName: "kube-api-access-sckxv") pod "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" (UID: "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f"). InnerVolumeSpecName "kube-api-access-sckxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.528843 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-kube-api-access-58lf6" (OuterVolumeSpecName: "kube-api-access-58lf6") pod "6239f68a-d80a-4fd4-9a6d-69bd48b1c015" (UID: "6239f68a-d80a-4fd4-9a6d-69bd48b1c015"). InnerVolumeSpecName "kube-api-access-58lf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.528961 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" (UID: "ca3d06f8-3cc9-4e77-9d45-e1232c00b04f"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.530353 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-utilities" (OuterVolumeSpecName: "utilities") pod "231a1216-2a55-4e7b-b026-104624c69857" (UID: "231a1216-2a55-4e7b-b026-104624c69857"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.531035 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-kube-api-access-rnv8p" (OuterVolumeSpecName: "kube-api-access-rnv8p") pod "1f5ab03e-b223-4e5b-8c9f-3d350a66156e" (UID: "1f5ab03e-b223-4e5b-8c9f-3d350a66156e"). InnerVolumeSpecName "kube-api-access-rnv8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.532790 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-utilities" (OuterVolumeSpecName: "utilities") pod "1f5ab03e-b223-4e5b-8c9f-3d350a66156e" (UID: "1f5ab03e-b223-4e5b-8c9f-3d350a66156e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.534270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/231a1216-2a55-4e7b-b026-104624c69857-kube-api-access-dfr46" (OuterVolumeSpecName: "kube-api-access-dfr46") pod "231a1216-2a55-4e7b-b026-104624c69857" (UID: "231a1216-2a55-4e7b-b026-104624c69857"). InnerVolumeSpecName "kube-api-access-dfr46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.558233 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6239f68a-d80a-4fd4-9a6d-69bd48b1c015" (UID: "6239f68a-d80a-4fd4-9a6d-69bd48b1c015"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.598853 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f5ab03e-b223-4e5b-8c9f-3d350a66156e" (UID: "1f5ab03e-b223-4e5b-8c9f-3d350a66156e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.622800 4805 generic.go:334] "Generic (PLEG): container finished" podID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerID="d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481" exitCode=0 Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.622868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" event={"ID":"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f","Type":"ContainerDied","Data":"d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.622899 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" event={"ID":"ca3d06f8-3cc9-4e77-9d45-e1232c00b04f","Type":"ContainerDied","Data":"60164c200f2f099a90f8d460237af904e8603af1a6e9dd10b1d6255edd870f78"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.622919 4805 scope.go:117] "RemoveContainer" 
containerID="d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.623056 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624349 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624383 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfr46\" (UniqueName: \"kubernetes.io/projected/231a1216-2a55-4e7b-b026-104624c69857-kube-api-access-dfr46\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624393 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624402 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624411 4805 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624419 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnv8p\" (UniqueName: \"kubernetes.io/projected/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-kube-api-access-rnv8p\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: 
I0226 17:22:54.624430 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58lf6\" (UniqueName: \"kubernetes.io/projected/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-kube-api-access-58lf6\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624438 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f5ab03e-b223-4e5b-8c9f-3d350a66156e-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624446 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624454 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sckxv\" (UniqueName: \"kubernetes.io/projected/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f-kube-api-access-sckxv\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.624468 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6239f68a-d80a-4fd4-9a6d-69bd48b1c015-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.626920 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerID="b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b" exitCode=0 Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.626992 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmjsl" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.627006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerDied","Data":"b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.627224 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmjsl" event={"ID":"1f5ab03e-b223-4e5b-8c9f-3d350a66156e","Type":"ContainerDied","Data":"ada6bb3732d4070c97c30a4bbdcc84dba25bd7637760060d01e08e0dfd378959"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.630405 4805 generic.go:334] "Generic (PLEG): container finished" podID="231a1216-2a55-4e7b-b026-104624c69857" containerID="10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7" exitCode=0 Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.630455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerDied","Data":"10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.630471 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wnt7l" event={"ID":"231a1216-2a55-4e7b-b026-104624c69857","Type":"ContainerDied","Data":"58b9956844f37b58c34bf84f23e37b4dbb0f0cdf3f01c2fbc77bce8e737f7629"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.630524 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wnt7l" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.634321 4805 generic.go:334] "Generic (PLEG): container finished" podID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerID="b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85" exitCode=0 Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.634438 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jttjv" event={"ID":"6239f68a-d80a-4fd4-9a6d-69bd48b1c015","Type":"ContainerDied","Data":"b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.634454 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jttjv" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.634472 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jttjv" event={"ID":"6239f68a-d80a-4fd4-9a6d-69bd48b1c015","Type":"ContainerDied","Data":"15ac97b2b081bb76149219f251e5191ed61b510817331a1e777f102bd39901ce"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.639220 4805 generic.go:334] "Generic (PLEG): container finished" podID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerID="d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3" exitCode=0 Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.639269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfcdt" event={"ID":"043bfd8c-1387-4b00-ad52-1e4efd43c942","Type":"ContainerDied","Data":"d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.639297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tfcdt" 
event={"ID":"043bfd8c-1387-4b00-ad52-1e4efd43c942","Type":"ContainerDied","Data":"22404ef21c15145bc08558aea2ffe42aa9d1fc50fdb7930736eb5177f0156df5"} Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.639664 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tfcdt" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.643679 4805 scope.go:117] "RemoveContainer" containerID="715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.677415 4805 scope.go:117] "RemoveContainer" containerID="d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481" Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.678509 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481\": container with ID starting with d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481 not found: ID does not exist" containerID="d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.680252 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481"} err="failed to get container status \"d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481\": rpc error: code = NotFound desc = could not find container \"d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481\": container with ID starting with d7342f56504ecc4a2d2d74adb34367a3a0c12ab1d4d85a4bf41cedfb50fdc481 not found: ID does not exist" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.681365 4805 scope.go:117] "RemoveContainer" containerID="715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 
17:22:54.683269 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmjsl"] Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.683554 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55\": container with ID starting with 715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55 not found: ID does not exist" containerID="715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.683650 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55"} err="failed to get container status \"715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55\": rpc error: code = NotFound desc = could not find container \"715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55\": container with ID starting with 715c9c1a7e8cdd6d792854aee2c8264bf52d2d10d0c3d38305e6d873da60da55 not found: ID does not exist" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.683703 4805 scope.go:117] "RemoveContainer" containerID="b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.689308 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "231a1216-2a55-4e7b-b026-104624c69857" (UID: "231a1216-2a55-4e7b-b026-104624c69857"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.691778 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rmjsl"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.701549 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d7sjf"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.705167 4805 scope.go:117] "RemoveContainer" containerID="aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.705688 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d7sjf"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.709161 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jttjv"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.712660 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jttjv"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.717148 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tfcdt"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.720688 4805 scope.go:117] "RemoveContainer" containerID="e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.721818 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tfcdt"] Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.726008 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/231a1216-2a55-4e7b-b026-104624c69857-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.733119 4805 scope.go:117] 
"RemoveContainer" containerID="b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b" Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.733498 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b\": container with ID starting with b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b not found: ID does not exist" containerID="b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.733536 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b"} err="failed to get container status \"b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b\": rpc error: code = NotFound desc = could not find container \"b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b\": container with ID starting with b79562286c88ab228b3c8c6104a206397cc102d7b51e066ac4575852f239913b not found: ID does not exist" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.733558 4805 scope.go:117] "RemoveContainer" containerID="aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8" Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.733832 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8\": container with ID starting with aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8 not found: ID does not exist" containerID="aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8" Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.733883 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8"} err="failed to get container status \"aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8\": rpc error: code = NotFound desc = could not find container \"aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8\": container with ID starting with aa08eff57de18a39f44f9ca390e4d05711ab0e1d58e229554386dfd26958b4b8 not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.733910 4805 scope.go:117] "RemoveContainer" containerID="e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.734340 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a\": container with ID starting with e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a not found: ID does not exist" containerID="e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.734366 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a"} err="failed to get container status \"e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a\": rpc error: code = NotFound desc = could not find container \"e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a\": container with ID starting with e7543bd7fcd7df374d52902d3296b8ae90c1ae522e097d10ddcd6c9fe31d0c9a not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.734383 4805 scope.go:117] "RemoveContainer" containerID="10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.745117 4805 scope.go:117] "RemoveContainer" 
containerID="06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.759166 4805 scope.go:117] "RemoveContainer" containerID="3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.773555 4805 scope.go:117] "RemoveContainer" containerID="10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.774641 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7\": container with ID starting with 10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7 not found: ID does not exist" containerID="10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.774690 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7"} err="failed to get container status \"10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7\": rpc error: code = NotFound desc = could not find container \"10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7\": container with ID starting with 10b9cc55c2f4f19742b566d3a2fe4577649c716fb7f2830c87a9aaee835557f7 not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.774718 4805 scope.go:117] "RemoveContainer" containerID="06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.775573 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7\": container with ID starting with 
06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7 not found: ID does not exist" containerID="06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.775607 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7"} err="failed to get container status \"06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7\": rpc error: code = NotFound desc = could not find container \"06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7\": container with ID starting with 06da329148ebaf712961f029d5244fbffa55525cfaf1ecf5efb77f1a0aa1dba7 not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.775649 4805 scope.go:117] "RemoveContainer" containerID="3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.775871 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa\": container with ID starting with 3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa not found: ID does not exist" containerID="3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.775907 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa"} err="failed to get container status \"3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa\": rpc error: code = NotFound desc = could not find container \"3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa\": container with ID starting with 3f2c66fe6516bca25e2fbbeaf987d3fe0a5aebf039162896c9b26801204f2dfa not found: ID does not 
exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.775919 4805 scope.go:117] "RemoveContainer" containerID="b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.811572 4805 scope.go:117] "RemoveContainer" containerID="d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.813699 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-shsd9"]
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.833057 4805 scope.go:117] "RemoveContainer" containerID="58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.854994 4805 scope.go:117] "RemoveContainer" containerID="b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.855417 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85\": container with ID starting with b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85 not found: ID does not exist" containerID="b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.855450 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85"} err="failed to get container status \"b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85\": rpc error: code = NotFound desc = could not find container \"b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85\": container with ID starting with b54daf2a21d916b80c5fdffaf0326ee22fbdc32fdbcdc2b58a8c35189309de85 not found: ID does not exist"
Feb 26 17:22:54 crc 
kubenswrapper[4805]: I0226 17:22:54.855470 4805 scope.go:117] "RemoveContainer" containerID="d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.855890 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584\": container with ID starting with d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584 not found: ID does not exist" containerID="d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.855930 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584"} err="failed to get container status \"d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584\": rpc error: code = NotFound desc = could not find container \"d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584\": container with ID starting with d9bbc138e66f5c98f6f5a1fcf76fc09d0f02d9825a1105212c886ddb1b227584 not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.855960 4805 scope.go:117] "RemoveContainer" containerID="58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.856431 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877\": container with ID starting with 58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877 not found: ID does not exist" containerID="58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.856456 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877"} err="failed to get container status \"58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877\": rpc error: code = NotFound desc = could not find container \"58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877\": container with ID starting with 58f7ff0d0eb4297c76fcabdaeaeda1cf4c6bc95a60fd79af453098edfe922877 not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.856472 4805 scope.go:117] "RemoveContainer" containerID="d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.868705 4805 scope.go:117] "RemoveContainer" containerID="15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.889910 4805 scope.go:117] "RemoveContainer" containerID="4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.916330 4805 scope.go:117] "RemoveContainer" containerID="d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.916764 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3\": container with ID starting with d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3 not found: ID does not exist" containerID="d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.916795 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3"} err="failed to get container status \"d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3\": rpc error: code = 
NotFound desc = could not find container \"d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3\": container with ID starting with d64c8f94627f0c9dc4487a64f520771b64fe673ec6d8fdb33fe6ef7bd536aca3 not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.916817 4805 scope.go:117] "RemoveContainer" containerID="15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.917044 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b\": container with ID starting with 15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b not found: ID does not exist" containerID="15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.917072 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b"} err="failed to get container status \"15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b\": rpc error: code = NotFound desc = could not find container \"15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b\": container with ID starting with 15d22ea6c47330e7a8396ff371731ae774dec0447f8abb5d78a23596fc8f949b not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.917091 4805 scope.go:117] "RemoveContainer" containerID="4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca"
Feb 26 17:22:54 crc kubenswrapper[4805]: E0226 17:22:54.917425 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca\": container with ID starting with 
4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca not found: ID does not exist" containerID="4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.917474 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca"} err="failed to get container status \"4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca\": rpc error: code = NotFound desc = could not find container \"4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca\": container with ID starting with 4678de5c7423994d3244ee7830b9b9fc2f8143927cf03668fd8217bf5a2cd6ca not found: ID does not exist"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.965823 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" path="/var/lib/kubelet/pods/043bfd8c-1387-4b00-ad52-1e4efd43c942/volumes"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.966489 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" path="/var/lib/kubelet/pods/1f5ab03e-b223-4e5b-8c9f-3d350a66156e/volumes"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.967080 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" path="/var/lib/kubelet/pods/6239f68a-d80a-4fd4-9a6d-69bd48b1c015/volumes"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.968101 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" path="/var/lib/kubelet/pods/ca3d06f8-3cc9-4e77-9d45-e1232c00b04f/volumes"
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.968489 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wnt7l"]
Feb 26 17:22:54 crc kubenswrapper[4805]: I0226 17:22:54.968519 
4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wnt7l"]
Feb 26 17:22:55 crc kubenswrapper[4805]: I0226 17:22:55.239054 4805 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d7sjf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 26 17:22:55 crc kubenswrapper[4805]: I0226 17:22:55.239124 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d7sjf" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 26 17:22:55 crc kubenswrapper[4805]: I0226 17:22:55.649092 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" event={"ID":"f4c1b8e1-fec8-422a-b155-b99fd4a121fc","Type":"ContainerStarted","Data":"a180cacd82cc42559b13ae2da2128b260c3241aead6b9c356d3772194f72cfc3"}
Feb 26 17:22:55 crc kubenswrapper[4805]: I0226 17:22:55.649131 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" event={"ID":"f4c1b8e1-fec8-422a-b155-b99fd4a121fc","Type":"ContainerStarted","Data":"9dcee77e5d0011cba73e9e9b74a6457aeedc5580170fe5347748579fb3746074"}
Feb 26 17:22:55 crc kubenswrapper[4805]: I0226 17:22:55.649337 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9"
Feb 26 17:22:55 crc kubenswrapper[4805]: I0226 17:22:55.653435 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9"
Feb 26 17:22:55 crc 
kubenswrapper[4805]: I0226 17:22:55.664063 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-shsd9" podStartSLOduration=2.664046343 podStartE2EDuration="2.664046343s" podCreationTimestamp="2026-02-26 17:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:22:55.662949695 +0000 UTC m=+490.224704064" watchObservedRunningTime="2026-02-26 17:22:55.664046343 +0000 UTC m=+490.225800682"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.091964 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwn5f"]
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092253 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092271 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092284 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092292 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092302 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092310 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 
17:22:56.092323 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092330 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092340 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092347 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092357 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092364 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092379 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092386 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092397 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092405 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 
17:22:56.092417 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092425 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092436 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092443 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092454 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092463 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="extract-content"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092473 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092481 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="extract-utilities"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092489 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092497 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 
17:22:56.092595 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092615 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f5ab03e-b223-4e5b-8c9f-3d350a66156e" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092624 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="043bfd8c-1387-4b00-ad52-1e4efd43c942" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092636 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="231a1216-2a55-4e7b-b026-104624c69857" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092644 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6239f68a-d80a-4fd4-9a6d-69bd48b1c015" containerName="registry-server"
Feb 26 17:22:56 crc kubenswrapper[4805]: E0226 17:22:56.092741 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092752 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.092849 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3d06f8-3cc9-4e77-9d45-e1232c00b04f" containerName="marketplace-operator"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.093492 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.096424 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.107301 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwn5f"]
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.141364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab13165-dd85-4398-996a-f9795912f12e-utilities\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.141421 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab13165-dd85-4398-996a-f9795912f12e-catalog-content\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.141456 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkk86\" (UniqueName: \"kubernetes.io/projected/cab13165-dd85-4398-996a-f9795912f12e-kube-api-access-hkk86\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.242290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab13165-dd85-4398-996a-f9795912f12e-utilities\") pod \"redhat-marketplace-zwn5f\" (UID: 
\"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.242386 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab13165-dd85-4398-996a-f9795912f12e-catalog-content\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.242425 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkk86\" (UniqueName: \"kubernetes.io/projected/cab13165-dd85-4398-996a-f9795912f12e-kube-api-access-hkk86\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.243154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cab13165-dd85-4398-996a-f9795912f12e-utilities\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.243242 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cab13165-dd85-4398-996a-f9795912f12e-catalog-content\") pod \"redhat-marketplace-zwn5f\" (UID: \"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.260980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkk86\" (UniqueName: \"kubernetes.io/projected/cab13165-dd85-4398-996a-f9795912f12e-kube-api-access-hkk86\") pod \"redhat-marketplace-zwn5f\" (UID: 
\"cab13165-dd85-4398-996a-f9795912f12e\") " pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.283773 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8k8mj"]
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.284996 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.288814 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.297116 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8k8mj"]
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.343589 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnbd\" (UniqueName: \"kubernetes.io/projected/366c763a-8d22-4e08-a81e-77464e51ad74-kube-api-access-5wnbd\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.343637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-utilities\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.343721 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-catalog-content\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " 
pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.408303 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwn5f"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.444976 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-catalog-content\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.445088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnbd\" (UniqueName: \"kubernetes.io/projected/366c763a-8d22-4e08-a81e-77464e51ad74-kube-api-access-5wnbd\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.445106 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-utilities\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.445553 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-catalog-content\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj"
Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.445563 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-utilities\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.467370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnbd\" (UniqueName: \"kubernetes.io/projected/366c763a-8d22-4e08-a81e-77464e51ad74-kube-api-access-5wnbd\") pod \"redhat-operators-8k8mj\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:22:56 crc kubenswrapper[4805]: I0226 17:22:56.606631 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.776715 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j8q9k"] Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.777515 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.808727 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j8q9k"] Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.819194 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwn5f"] Feb 26 17:22:57 crc kubenswrapper[4805]: W0226 17:22:56.824543 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcab13165_dd85_4398_996a_f9795912f12e.slice/crio-6940c8289cc0be884cab6a03ec4d82d3594ca483b9512fc922afea0162f89af3 WatchSource:0}: Error finding container 6940c8289cc0be884cab6a03ec4d82d3594ca483b9512fc922afea0162f89af3: Status 404 returned error can't find the container with id 6940c8289cc0be884cab6a03ec4d82d3594ca483b9512fc922afea0162f89af3 Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.852529 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bfabe3f5-8232-490b-ae30-662ec3729910-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.852572 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-bound-sa-token\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.852630 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bfabe3f5-8232-490b-ae30-662ec3729910-registry-certificates\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.852667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdnjl\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-kube-api-access-zdnjl\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.852693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bfabe3f5-8232-490b-ae30-662ec3729910-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.852803 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.853083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfabe3f5-8232-490b-ae30-662ec3729910-trusted-ca\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.853107 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-registry-tls\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.881586 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954076 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-bound-sa-token\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954125 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bfabe3f5-8232-490b-ae30-662ec3729910-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/bfabe3f5-8232-490b-ae30-662ec3729910-registry-certificates\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954204 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdnjl\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-kube-api-access-zdnjl\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bfabe3f5-8232-490b-ae30-662ec3729910-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954274 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfabe3f5-8232-490b-ae30-662ec3729910-trusted-ca\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.954296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-registry-tls\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.955955 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bfabe3f5-8232-490b-ae30-662ec3729910-registry-certificates\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.956518 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bfabe3f5-8232-490b-ae30-662ec3729910-ca-trust-extracted\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.957670 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfabe3f5-8232-490b-ae30-662ec3729910-trusted-ca\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.959634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bfabe3f5-8232-490b-ae30-662ec3729910-installation-pull-secrets\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.961947 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="231a1216-2a55-4e7b-b026-104624c69857" path="/var/lib/kubelet/pods/231a1216-2a55-4e7b-b026-104624c69857/volumes" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.965770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-registry-tls\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.979564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdnjl\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-kube-api-access-zdnjl\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:56.980621 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bfabe3f5-8232-490b-ae30-662ec3729910-bound-sa-token\") pod \"image-registry-66df7c8f76-j8q9k\" (UID: \"bfabe3f5-8232-490b-ae30-662ec3729910\") " pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:57.093621 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:57.668897 4805 generic.go:334] "Generic (PLEG): container finished" podID="cab13165-dd85-4398-996a-f9795912f12e" containerID="f58099a039ca5f05db29e7a07a1e3c9f707ee4381281377a621fb30f4d4b5fdf" exitCode=0 Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:57.669050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwn5f" event={"ID":"cab13165-dd85-4398-996a-f9795912f12e","Type":"ContainerDied","Data":"f58099a039ca5f05db29e7a07a1e3c9f707ee4381281377a621fb30f4d4b5fdf"} Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:57.669323 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwn5f" event={"ID":"cab13165-dd85-4398-996a-f9795912f12e","Type":"ContainerStarted","Data":"6940c8289cc0be884cab6a03ec4d82d3594ca483b9512fc922afea0162f89af3"} Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:57.721767 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8k8mj"] Feb 26 17:22:57 crc kubenswrapper[4805]: W0226 17:22:57.725944 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod366c763a_8d22_4e08_a81e_77464e51ad74.slice/crio-ecdfda411d571a2b5140566f8916f7d37ea214036f8a7e9472f94e1e66bc725b WatchSource:0}: Error finding container ecdfda411d571a2b5140566f8916f7d37ea214036f8a7e9472f94e1e66bc725b: Status 404 returned error can't find the container with id ecdfda411d571a2b5140566f8916f7d37ea214036f8a7e9472f94e1e66bc725b Feb 26 17:22:57 crc kubenswrapper[4805]: I0226 17:22:57.788475 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-j8q9k"] Feb 26 17:22:57 crc kubenswrapper[4805]: W0226 17:22:57.793030 4805 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfabe3f5_8232_490b_ae30_662ec3729910.slice/crio-f96f03c1287bc117c413b65e5d5c0d7d74e82b63e801755a1440d0b17cd3f8c3 WatchSource:0}: Error finding container f96f03c1287bc117c413b65e5d5c0d7d74e82b63e801755a1440d0b17cd3f8c3: Status 404 returned error can't find the container with id f96f03c1287bc117c413b65e5d5c0d7d74e82b63e801755a1440d0b17cd3f8c3 Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.486147 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r5crp"] Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.487569 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.491251 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.498443 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5crp"] Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.574887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d021e872-9d99-40a3-8b0d-865aa5c8b287-catalog-content\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.575005 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5kf\" (UniqueName: \"kubernetes.io/projected/d021e872-9d99-40a3-8b0d-865aa5c8b287-kube-api-access-bg5kf\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 
26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.575101 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d021e872-9d99-40a3-8b0d-865aa5c8b287-utilities\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.675869 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d021e872-9d99-40a3-8b0d-865aa5c8b287-catalog-content\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.675968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5kf\" (UniqueName: \"kubernetes.io/projected/d021e872-9d99-40a3-8b0d-865aa5c8b287-kube-api-access-bg5kf\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.675997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d021e872-9d99-40a3-8b0d-865aa5c8b287-utilities\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.676124 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" event={"ID":"bfabe3f5-8232-490b-ae30-662ec3729910","Type":"ContainerStarted","Data":"b64835158aa589afcf0dc6485abb737f98317fdd06b8e535ae2e79678bb2b8a7"} Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.676164 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" event={"ID":"bfabe3f5-8232-490b-ae30-662ec3729910","Type":"ContainerStarted","Data":"f96f03c1287bc117c413b65e5d5c0d7d74e82b63e801755a1440d0b17cd3f8c3"} Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.676508 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.677714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d021e872-9d99-40a3-8b0d-865aa5c8b287-catalog-content\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.677842 4805 generic.go:334] "Generic (PLEG): container finished" podID="cab13165-dd85-4398-996a-f9795912f12e" containerID="0bd98378cbcc4bbbc05f721c485ccb48bbbd7273973bfef696a2ec93f1429c18" exitCode=0 Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.677974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwn5f" event={"ID":"cab13165-dd85-4398-996a-f9795912f12e","Type":"ContainerDied","Data":"0bd98378cbcc4bbbc05f721c485ccb48bbbd7273973bfef696a2ec93f1429c18"} Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.678809 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d021e872-9d99-40a3-8b0d-865aa5c8b287-utilities\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.680988 4805 generic.go:334] "Generic (PLEG): container finished" podID="366c763a-8d22-4e08-a81e-77464e51ad74" 
containerID="f017b019df6cf6c4e4dd68e20a302751a4e4f67fe6d75a605164b81a1e2ab278" exitCode=0 Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.681058 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerDied","Data":"f017b019df6cf6c4e4dd68e20a302751a4e4f67fe6d75a605164b81a1e2ab278"} Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.681091 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerStarted","Data":"ecdfda411d571a2b5140566f8916f7d37ea214036f8a7e9472f94e1e66bc725b"} Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.693241 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8np4"] Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.694701 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.698846 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.708616 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5kf\" (UniqueName: \"kubernetes.io/projected/d021e872-9d99-40a3-8b0d-865aa5c8b287-kube-api-access-bg5kf\") pod \"community-operators-r5crp\" (UID: \"d021e872-9d99-40a3-8b0d-865aa5c8b287\") " pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.713253 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8np4"] Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.717166 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" podStartSLOduration=2.717146278 podStartE2EDuration="2.717146278s" podCreationTimestamp="2026-02-26 17:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:22:58.705697891 +0000 UTC m=+493.267452240" watchObservedRunningTime="2026-02-26 17:22:58.717146278 +0000 UTC m=+493.278900627" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.777959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brm4n\" (UniqueName: \"kubernetes.io/projected/f0f39e3f-788e-466a-bb81-0278246ad4b6-kube-api-access-brm4n\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.778191 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f39e3f-788e-466a-bb81-0278246ad4b6-catalog-content\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.779244 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f39e3f-788e-466a-bb81-0278246ad4b6-utilities\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.808274 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.882143 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f39e3f-788e-466a-bb81-0278246ad4b6-catalog-content\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.882493 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f39e3f-788e-466a-bb81-0278246ad4b6-utilities\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.882564 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brm4n\" (UniqueName: \"kubernetes.io/projected/f0f39e3f-788e-466a-bb81-0278246ad4b6-kube-api-access-brm4n\") pod 
\"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.883036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0f39e3f-788e-466a-bb81-0278246ad4b6-catalog-content\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.883433 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0f39e3f-788e-466a-bb81-0278246ad4b6-utilities\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:58 crc kubenswrapper[4805]: I0226 17:22:58.910245 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brm4n\" (UniqueName: \"kubernetes.io/projected/f0f39e3f-788e-466a-bb81-0278246ad4b6-kube-api-access-brm4n\") pod \"certified-operators-z8np4\" (UID: \"f0f39e3f-788e-466a-bb81-0278246ad4b6\") " pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.049582 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.222618 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5crp"] Feb 26 17:22:59 crc kubenswrapper[4805]: W0226 17:22:59.234679 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd021e872_9d99_40a3_8b0d_865aa5c8b287.slice/crio-ff20e9bdf7412a756bf454fc45ef64b6938a7cb050bcd2efefcb3fda3a74051e WatchSource:0}: Error finding container ff20e9bdf7412a756bf454fc45ef64b6938a7cb050bcd2efefcb3fda3a74051e: Status 404 returned error can't find the container with id ff20e9bdf7412a756bf454fc45ef64b6938a7cb050bcd2efefcb3fda3a74051e Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.485770 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8np4"] Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.688076 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8np4" event={"ID":"f0f39e3f-788e-466a-bb81-0278246ad4b6","Type":"ContainerStarted","Data":"da0936d9074ea36d06e970cd5561354be6ada9fa8c361576e2c02c9afb6cecd1"} Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.689971 4805 generic.go:334] "Generic (PLEG): container finished" podID="d021e872-9d99-40a3-8b0d-865aa5c8b287" containerID="d299feee60a77fef57f5e06ca27aaebcb186e66aefb61f9e622e9adeda9a4756" exitCode=0 Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.690050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5crp" event={"ID":"d021e872-9d99-40a3-8b0d-865aa5c8b287","Type":"ContainerDied","Data":"d299feee60a77fef57f5e06ca27aaebcb186e66aefb61f9e622e9adeda9a4756"} Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.690103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-r5crp" event={"ID":"d021e872-9d99-40a3-8b0d-865aa5c8b287","Type":"ContainerStarted","Data":"ff20e9bdf7412a756bf454fc45ef64b6938a7cb050bcd2efefcb3fda3a74051e"} Feb 26 17:22:59 crc kubenswrapper[4805]: I0226 17:22:59.692502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwn5f" event={"ID":"cab13165-dd85-4398-996a-f9795912f12e","Type":"ContainerStarted","Data":"43a6dc736beafe58ddc4a2fdaeaa5e42fc1a872ecf39e56259ec3e29f5c4cc41"} Feb 26 17:23:00 crc kubenswrapper[4805]: I0226 17:23:00.699053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerStarted","Data":"9208f8cd37127487d5b9a25b492eeb6499de05e250cc8d0409946a3e9c5a8526"} Feb 26 17:23:00 crc kubenswrapper[4805]: I0226 17:23:00.700802 4805 generic.go:334] "Generic (PLEG): container finished" podID="f0f39e3f-788e-466a-bb81-0278246ad4b6" containerID="b7703d0850cab3d8e777e15c08a113ad35d0b661d9546046be3d82b666593681" exitCode=0 Feb 26 17:23:00 crc kubenswrapper[4805]: I0226 17:23:00.701567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8np4" event={"ID":"f0f39e3f-788e-466a-bb81-0278246ad4b6","Type":"ContainerDied","Data":"b7703d0850cab3d8e777e15c08a113ad35d0b661d9546046be3d82b666593681"} Feb 26 17:23:00 crc kubenswrapper[4805]: I0226 17:23:00.727468 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwn5f" podStartSLOduration=3.217536076 podStartE2EDuration="4.727449327s" podCreationTimestamp="2026-02-26 17:22:56 +0000 UTC" firstStartedPulling="2026-02-26 17:22:57.670782623 +0000 UTC m=+492.232536952" lastFinishedPulling="2026-02-26 17:22:59.180695864 +0000 UTC m=+493.742450203" observedRunningTime="2026-02-26 17:22:59.724304473 +0000 UTC m=+494.286058822" 
watchObservedRunningTime="2026-02-26 17:23:00.727449327 +0000 UTC m=+495.289203666" Feb 26 17:23:01 crc kubenswrapper[4805]: I0226 17:23:01.712554 4805 generic.go:334] "Generic (PLEG): container finished" podID="d021e872-9d99-40a3-8b0d-865aa5c8b287" containerID="1af151861f61a51c1572a850f8fcb4621c3d83d801533caf06b0a4dc6e1e8c48" exitCode=0 Feb 26 17:23:01 crc kubenswrapper[4805]: I0226 17:23:01.712641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5crp" event={"ID":"d021e872-9d99-40a3-8b0d-865aa5c8b287","Type":"ContainerDied","Data":"1af151861f61a51c1572a850f8fcb4621c3d83d801533caf06b0a4dc6e1e8c48"} Feb 26 17:23:01 crc kubenswrapper[4805]: I0226 17:23:01.714793 4805 generic.go:334] "Generic (PLEG): container finished" podID="366c763a-8d22-4e08-a81e-77464e51ad74" containerID="9208f8cd37127487d5b9a25b492eeb6499de05e250cc8d0409946a3e9c5a8526" exitCode=0 Feb 26 17:23:01 crc kubenswrapper[4805]: I0226 17:23:01.714853 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerDied","Data":"9208f8cd37127487d5b9a25b492eeb6499de05e250cc8d0409946a3e9c5a8526"} Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.014417 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd9f4966d-6glgz"] Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.015716 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" podUID="93b3feae-d3a5-4818-94c6-ac59f5ec20e9" containerName="controller-manager" containerID="cri-o://9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767" gracePeriod=30 Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.447489 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.631996 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-client-ca\") pod \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.632089 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-proxy-ca-bundles\") pod \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.632139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-serving-cert\") pod \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.632172 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4ql2\" (UniqueName: \"kubernetes.io/projected/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-kube-api-access-s4ql2\") pod \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.632219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-config\") pod \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\" (UID: \"93b3feae-d3a5-4818-94c6-ac59f5ec20e9\") " Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.633809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "93b3feae-d3a5-4818-94c6-ac59f5ec20e9" (UID: "93b3feae-d3a5-4818-94c6-ac59f5ec20e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.633853 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-config" (OuterVolumeSpecName: "config") pod "93b3feae-d3a5-4818-94c6-ac59f5ec20e9" (UID: "93b3feae-d3a5-4818-94c6-ac59f5ec20e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.634417 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93b3feae-d3a5-4818-94c6-ac59f5ec20e9" (UID: "93b3feae-d3a5-4818-94c6-ac59f5ec20e9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.640610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93b3feae-d3a5-4818-94c6-ac59f5ec20e9" (UID: "93b3feae-d3a5-4818-94c6-ac59f5ec20e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.640712 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-kube-api-access-s4ql2" (OuterVolumeSpecName: "kube-api-access-s4ql2") pod "93b3feae-d3a5-4818-94c6-ac59f5ec20e9" (UID: "93b3feae-d3a5-4818-94c6-ac59f5ec20e9"). InnerVolumeSpecName "kube-api-access-s4ql2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.722847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8np4" event={"ID":"f0f39e3f-788e-466a-bb81-0278246ad4b6","Type":"ContainerStarted","Data":"dd14d4185306070f237d914b42b2604e0d0a6925abb73aeae3ed35797fa5b8bc"} Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.725640 4805 generic.go:334] "Generic (PLEG): container finished" podID="93b3feae-d3a5-4818-94c6-ac59f5ec20e9" containerID="9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767" exitCode=0 Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.725693 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.725697 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" event={"ID":"93b3feae-d3a5-4818-94c6-ac59f5ec20e9","Type":"ContainerDied","Data":"9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767"} Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.725903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cd9f4966d-6glgz" event={"ID":"93b3feae-d3a5-4818-94c6-ac59f5ec20e9","Type":"ContainerDied","Data":"89c0ee7fe47543ee3856b89e095e66c170d56de3af3483c7094fa01a966e5eb1"} Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.725935 4805 scope.go:117] "RemoveContainer" containerID="9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.731869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" 
event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerStarted","Data":"1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c"} Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.733116 4805 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.733142 4805 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.733159 4805 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.733170 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4ql2\" (UniqueName: \"kubernetes.io/projected/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-kube-api-access-s4ql2\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.733183 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93b3feae-d3a5-4818-94c6-ac59f5ec20e9-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.746224 4805 scope.go:117] "RemoveContainer" containerID="9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767" Feb 26 17:23:02 crc kubenswrapper[4805]: E0226 17:23:02.747477 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767\": container with ID starting with 
9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767 not found: ID does not exist" containerID="9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.747522 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767"} err="failed to get container status \"9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767\": rpc error: code = NotFound desc = could not find container \"9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767\": container with ID starting with 9492072be6cab00a0af5648c3f6aa728cfa52f348513b0d0cf83a33c1fa6e767 not found: ID does not exist" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.767062 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8k8mj" podStartSLOduration=2.903351745 podStartE2EDuration="6.767044568s" podCreationTimestamp="2026-02-26 17:22:56 +0000 UTC" firstStartedPulling="2026-02-26 17:22:58.682194083 +0000 UTC m=+493.243948422" lastFinishedPulling="2026-02-26 17:23:02.545886906 +0000 UTC m=+497.107641245" observedRunningTime="2026-02-26 17:23:02.764830403 +0000 UTC m=+497.326584752" watchObservedRunningTime="2026-02-26 17:23:02.767044568 +0000 UTC m=+497.328798907" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.782376 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cd9f4966d-6glgz"] Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.785670 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cd9f4966d-6glgz"] Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.963009 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b3feae-d3a5-4818-94c6-ac59f5ec20e9" 
path="/var/lib/kubelet/pods/93b3feae-d3a5-4818-94c6-ac59f5ec20e9/volumes" Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.977941 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:23:02 crc kubenswrapper[4805]: I0226 17:23:02.978128 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.746468 4805 generic.go:334] "Generic (PLEG): container finished" podID="f0f39e3f-788e-466a-bb81-0278246ad4b6" containerID="dd14d4185306070f237d914b42b2604e0d0a6925abb73aeae3ed35797fa5b8bc" exitCode=0 Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.746596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8np4" event={"ID":"f0f39e3f-788e-466a-bb81-0278246ad4b6","Type":"ContainerDied","Data":"dd14d4185306070f237d914b42b2604e0d0a6925abb73aeae3ed35797fa5b8bc"} Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.752676 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5crp" event={"ID":"d021e872-9d99-40a3-8b0d-865aa5c8b287","Type":"ContainerStarted","Data":"7ce177960c69d751c01a15f9ac6c84bafa73ef8d8d52db947cafa092339a4212"} Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.818236 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r5crp" podStartSLOduration=2.799882669 podStartE2EDuration="5.818212334s" 
podCreationTimestamp="2026-02-26 17:22:58 +0000 UTC" firstStartedPulling="2026-02-26 17:22:59.691626815 +0000 UTC m=+494.253381154" lastFinishedPulling="2026-02-26 17:23:02.70995646 +0000 UTC m=+497.271710819" observedRunningTime="2026-02-26 17:23:03.783291431 +0000 UTC m=+498.345045950" watchObservedRunningTime="2026-02-26 17:23:03.818212334 +0000 UTC m=+498.379966673" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.818868 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-4xb7s"] Feb 26 17:23:03 crc kubenswrapper[4805]: E0226 17:23:03.819203 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b3feae-d3a5-4818-94c6-ac59f5ec20e9" containerName="controller-manager" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.819228 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b3feae-d3a5-4818-94c6-ac59f5ec20e9" containerName="controller-manager" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.819488 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b3feae-d3a5-4818-94c6-ac59f5ec20e9" containerName="controller-manager" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.819946 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.821712 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.821984 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.822293 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.822885 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.822942 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.823315 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.833374 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.836481 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-4xb7s"] Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.951368 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-client-ca\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " 
pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.951462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-config\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.951568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-proxy-ca-bundles\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.951617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95298430-dff6-465b-a059-28fafe0d6e38-serving-cert\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:03 crc kubenswrapper[4805]: I0226 17:23:03.951841 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96pbl\" (UniqueName: \"kubernetes.io/projected/95298430-dff6-465b-a059-28fafe0d6e38-kube-api-access-96pbl\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.052724 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-proxy-ca-bundles\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.052806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95298430-dff6-465b-a059-28fafe0d6e38-serving-cert\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.052846 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96pbl\" (UniqueName: \"kubernetes.io/projected/95298430-dff6-465b-a059-28fafe0d6e38-kube-api-access-96pbl\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.052892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-client-ca\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.052923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-config\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.053993 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-proxy-ca-bundles\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.054224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-config\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.054793 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95298430-dff6-465b-a059-28fafe0d6e38-client-ca\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.059760 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95298430-dff6-465b-a059-28fafe0d6e38-serving-cert\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.073912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96pbl\" (UniqueName: \"kubernetes.io/projected/95298430-dff6-465b-a059-28fafe0d6e38-kube-api-access-96pbl\") pod \"controller-manager-7f86856858-4xb7s\" (UID: \"95298430-dff6-465b-a059-28fafe0d6e38\") " pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 
17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.134861 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.326126 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f86856858-4xb7s"] Feb 26 17:23:04 crc kubenswrapper[4805]: W0226 17:23:04.334749 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95298430_dff6_465b_a059_28fafe0d6e38.slice/crio-beb49ddea6cad423e6af69d39a3468beda045c40e5c788fc0e064ba63f4a4e2e WatchSource:0}: Error finding container beb49ddea6cad423e6af69d39a3468beda045c40e5c788fc0e064ba63f4a4e2e: Status 404 returned error can't find the container with id beb49ddea6cad423e6af69d39a3468beda045c40e5c788fc0e064ba63f4a4e2e Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.761489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" event={"ID":"95298430-dff6-465b-a059-28fafe0d6e38","Type":"ContainerStarted","Data":"20ecdc5e58fa7614689e749cccdeffffab8e58ef943963dad450b011440cf9e5"} Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.761555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" event={"ID":"95298430-dff6-465b-a059-28fafe0d6e38","Type":"ContainerStarted","Data":"beb49ddea6cad423e6af69d39a3468beda045c40e5c788fc0e064ba63f4a4e2e"} Feb 26 17:23:04 crc kubenswrapper[4805]: I0226 17:23:04.781908 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" podStartSLOduration=2.781890131 podStartE2EDuration="2.781890131s" podCreationTimestamp="2026-02-26 17:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:23:04.77743892 +0000 UTC m=+499.339193269" watchObservedRunningTime="2026-02-26 17:23:04.781890131 +0000 UTC m=+499.343644470" Feb 26 17:23:05 crc kubenswrapper[4805]: I0226 17:23:05.771110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8np4" event={"ID":"f0f39e3f-788e-466a-bb81-0278246ad4b6","Type":"ContainerStarted","Data":"a9cde173988286a34105c76ddf1a3b310b8bc3dfabdfe7bba456c11331ad0fcd"} Feb 26 17:23:05 crc kubenswrapper[4805]: I0226 17:23:05.771162 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:05 crc kubenswrapper[4805]: I0226 17:23:05.777566 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f86856858-4xb7s" Feb 26 17:23:05 crc kubenswrapper[4805]: I0226 17:23:05.791230 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8np4" podStartSLOduration=3.348664297 podStartE2EDuration="7.79121029s" podCreationTimestamp="2026-02-26 17:22:58 +0000 UTC" firstStartedPulling="2026-02-26 17:23:00.703121859 +0000 UTC m=+495.264876218" lastFinishedPulling="2026-02-26 17:23:05.145667872 +0000 UTC m=+499.707422211" observedRunningTime="2026-02-26 17:23:05.789669242 +0000 UTC m=+500.351423581" watchObservedRunningTime="2026-02-26 17:23:05.79121029 +0000 UTC m=+500.352964629" Feb 26 17:23:06 crc kubenswrapper[4805]: I0226 17:23:06.408898 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwn5f" Feb 26 17:23:06 crc kubenswrapper[4805]: I0226 17:23:06.408975 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwn5f" Feb 26 17:23:06 crc kubenswrapper[4805]: 
I0226 17:23:06.457846 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwn5f" Feb 26 17:23:06 crc kubenswrapper[4805]: I0226 17:23:06.607134 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:23:06 crc kubenswrapper[4805]: I0226 17:23:06.607202 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:23:06 crc kubenswrapper[4805]: I0226 17:23:06.830104 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwn5f" Feb 26 17:23:07 crc kubenswrapper[4805]: I0226 17:23:07.644377 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8k8mj" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="registry-server" probeResult="failure" output=< Feb 26 17:23:07 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:23:07 crc kubenswrapper[4805]: > Feb 26 17:23:08 crc kubenswrapper[4805]: I0226 17:23:08.808510 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:23:08 crc kubenswrapper[4805]: I0226 17:23:08.808600 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:23:08 crc kubenswrapper[4805]: I0226 17:23:08.849977 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:23:09 crc kubenswrapper[4805]: I0226 17:23:09.049871 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:23:09 crc kubenswrapper[4805]: I0226 17:23:09.050158 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:23:09 crc kubenswrapper[4805]: I0226 17:23:09.083552 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:23:09 crc kubenswrapper[4805]: I0226 17:23:09.834922 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r5crp" Feb 26 17:23:10 crc kubenswrapper[4805]: I0226 17:23:10.837941 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8np4" Feb 26 17:23:16 crc kubenswrapper[4805]: I0226 17:23:16.650974 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:23:16 crc kubenswrapper[4805]: I0226 17:23:16.697533 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:23:17 crc kubenswrapper[4805]: I0226 17:23:17.100061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-j8q9k" Feb 26 17:23:17 crc kubenswrapper[4805]: I0226 17:23:17.159050 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tshb4"] Feb 26 17:23:32 crc kubenswrapper[4805]: I0226 17:23:32.977983 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:23:32 crc kubenswrapper[4805]: I0226 17:23:32.978632 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:23:32 crc kubenswrapper[4805]: I0226 17:23:32.978694 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:23:32 crc kubenswrapper[4805]: I0226 17:23:32.979381 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"042dcdf4837fac5099eaa927fbb96bd6244e875ff2c0526c5bf517e80e23ce1d"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:23:32 crc kubenswrapper[4805]: I0226 17:23:32.979445 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://042dcdf4837fac5099eaa927fbb96bd6244e875ff2c0526c5bf517e80e23ce1d" gracePeriod=600 Feb 26 17:23:33 crc kubenswrapper[4805]: I0226 17:23:33.926135 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="042dcdf4837fac5099eaa927fbb96bd6244e875ff2c0526c5bf517e80e23ce1d" exitCode=0 Feb 26 17:23:33 crc kubenswrapper[4805]: I0226 17:23:33.926210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"042dcdf4837fac5099eaa927fbb96bd6244e875ff2c0526c5bf517e80e23ce1d"} Feb 26 17:23:33 crc kubenswrapper[4805]: I0226 17:23:33.926756 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"2ad740ea60d5ab6cc5388ad856d449d28f7d0452892c7d5d07969ba24766bab4"} Feb 26 17:23:33 crc kubenswrapper[4805]: I0226 17:23:33.926783 4805 scope.go:117] "RemoveContainer" containerID="d7f80b34e3665661c1452d156de81be86bde44bd78e72eddf52da56e7b3231bb" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.199517 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" podUID="c631c898-5180-424c-8cae-922d1a709938" containerName="registry" containerID="cri-o://20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8" gracePeriod=30 Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.592606 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777247 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-registry-tls\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631c898-5180-424c-8cae-922d1a709938-installation-pull-secrets\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777528 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-bound-sa-token\") pod 
\"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777593 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm2sp\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-kube-api-access-sm2sp\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777627 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-trusted-ca\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777764 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777823 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-registry-certificates\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.777859 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631c898-5180-424c-8cae-922d1a709938-ca-trust-extracted\") pod \"c631c898-5180-424c-8cae-922d1a709938\" (UID: \"c631c898-5180-424c-8cae-922d1a709938\") " Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.778605 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.779173 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.785792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.786805 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c631c898-5180-424c-8cae-922d1a709938-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.787654 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-kube-api-access-sm2sp" (OuterVolumeSpecName: "kube-api-access-sm2sp") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "kube-api-access-sm2sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.790793 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.794141 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.798046 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c631c898-5180-424c-8cae-922d1a709938-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c631c898-5180-424c-8cae-922d1a709938" (UID: "c631c898-5180-424c-8cae-922d1a709938"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879472 4805 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879531 4805 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c631c898-5180-424c-8cae-922d1a709938-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879549 4805 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879559 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm2sp\" (UniqueName: \"kubernetes.io/projected/c631c898-5180-424c-8cae-922d1a709938-kube-api-access-sm2sp\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879570 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879579 4805 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c631c898-5180-424c-8cae-922d1a709938-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.879588 4805 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c631c898-5180-424c-8cae-922d1a709938-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 26 17:23:42 crc 
kubenswrapper[4805]: I0226 17:23:42.989482 4805 generic.go:334] "Generic (PLEG): container finished" podID="c631c898-5180-424c-8cae-922d1a709938" containerID="20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8" exitCode=0 Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.989561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" event={"ID":"c631c898-5180-424c-8cae-922d1a709938","Type":"ContainerDied","Data":"20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8"} Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.989648 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" event={"ID":"c631c898-5180-424c-8cae-922d1a709938","Type":"ContainerDied","Data":"ff7cd83cbaf11d1c207c6f8602ebc000bea62031cb792a62dc9432eb4f16d7ce"} Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.989682 4805 scope.go:117] "RemoveContainer" containerID="20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8" Feb 26 17:23:42 crc kubenswrapper[4805]: I0226 17:23:42.989584 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tshb4" Feb 26 17:23:43 crc kubenswrapper[4805]: I0226 17:23:43.013063 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tshb4"] Feb 26 17:23:43 crc kubenswrapper[4805]: I0226 17:23:43.017342 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tshb4"] Feb 26 17:23:43 crc kubenswrapper[4805]: I0226 17:23:43.019352 4805 scope.go:117] "RemoveContainer" containerID="20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8" Feb 26 17:23:43 crc kubenswrapper[4805]: E0226 17:23:43.020139 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8\": container with ID starting with 20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8 not found: ID does not exist" containerID="20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8" Feb 26 17:23:43 crc kubenswrapper[4805]: I0226 17:23:43.020192 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8"} err="failed to get container status \"20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8\": rpc error: code = NotFound desc = could not find container \"20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8\": container with ID starting with 20e4d2e4365c766bfebae89a1bee804b406e2753f3ed41103e567b372d7a4ba8 not found: ID does not exist" Feb 26 17:23:44 crc kubenswrapper[4805]: I0226 17:23:44.964735 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c631c898-5180-424c-8cae-922d1a709938" path="/var/lib/kubelet/pods/c631c898-5180-424c-8cae-922d1a709938/volumes" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 
17:24:00.136743 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535444-vbq7n"] Feb 26 17:24:00 crc kubenswrapper[4805]: E0226 17:24:00.137485 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c631c898-5180-424c-8cae-922d1a709938" containerName="registry" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.137498 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c631c898-5180-424c-8cae-922d1a709938" containerName="registry" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.137581 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c631c898-5180-424c-8cae-922d1a709938" containerName="registry" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.137937 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.140001 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.140008 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.140600 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.148871 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535444-vbq7n"] Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.309403 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65hs\" (UniqueName: \"kubernetes.io/projected/31ac4c66-1f9e-4091-b057-ebac30de30ec-kube-api-access-d65hs\") pod \"auto-csr-approver-29535444-vbq7n\" (UID: \"31ac4c66-1f9e-4091-b057-ebac30de30ec\") " 
pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.411205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d65hs\" (UniqueName: \"kubernetes.io/projected/31ac4c66-1f9e-4091-b057-ebac30de30ec-kube-api-access-d65hs\") pod \"auto-csr-approver-29535444-vbq7n\" (UID: \"31ac4c66-1f9e-4091-b057-ebac30de30ec\") " pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.435549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65hs\" (UniqueName: \"kubernetes.io/projected/31ac4c66-1f9e-4091-b057-ebac30de30ec-kube-api-access-d65hs\") pod \"auto-csr-approver-29535444-vbq7n\" (UID: \"31ac4c66-1f9e-4091-b057-ebac30de30ec\") " pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.460123 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.854110 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535444-vbq7n"] Feb 26 17:24:00 crc kubenswrapper[4805]: I0226 17:24:00.864780 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:24:01 crc kubenswrapper[4805]: I0226 17:24:01.103524 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" event={"ID":"31ac4c66-1f9e-4091-b057-ebac30de30ec","Type":"ContainerStarted","Data":"4b6b483e6731f3ad7e4d48e8031f07158e2f634be04aa9cba8141e39e925729d"} Feb 26 17:24:03 crc kubenswrapper[4805]: I0226 17:24:03.124256 4805 generic.go:334] "Generic (PLEG): container finished" podID="31ac4c66-1f9e-4091-b057-ebac30de30ec" containerID="82ff1457db2fa0cb9203af37c2fad5c9123810bcd878a8e5cb8bc28ef40f472e" exitCode=0 Feb 
26 17:24:03 crc kubenswrapper[4805]: I0226 17:24:03.124345 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" event={"ID":"31ac4c66-1f9e-4091-b057-ebac30de30ec","Type":"ContainerDied","Data":"82ff1457db2fa0cb9203af37c2fad5c9123810bcd878a8e5cb8bc28ef40f472e"} Feb 26 17:24:04 crc kubenswrapper[4805]: I0226 17:24:04.392362 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:04 crc kubenswrapper[4805]: I0226 17:24:04.565511 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d65hs\" (UniqueName: \"kubernetes.io/projected/31ac4c66-1f9e-4091-b057-ebac30de30ec-kube-api-access-d65hs\") pod \"31ac4c66-1f9e-4091-b057-ebac30de30ec\" (UID: \"31ac4c66-1f9e-4091-b057-ebac30de30ec\") " Feb 26 17:24:04 crc kubenswrapper[4805]: I0226 17:24:04.573679 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ac4c66-1f9e-4091-b057-ebac30de30ec-kube-api-access-d65hs" (OuterVolumeSpecName: "kube-api-access-d65hs") pod "31ac4c66-1f9e-4091-b057-ebac30de30ec" (UID: "31ac4c66-1f9e-4091-b057-ebac30de30ec"). InnerVolumeSpecName "kube-api-access-d65hs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:24:04 crc kubenswrapper[4805]: I0226 17:24:04.667137 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d65hs\" (UniqueName: \"kubernetes.io/projected/31ac4c66-1f9e-4091-b057-ebac30de30ec-kube-api-access-d65hs\") on node \"crc\" DevicePath \"\"" Feb 26 17:24:05 crc kubenswrapper[4805]: I0226 17:24:05.137360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" event={"ID":"31ac4c66-1f9e-4091-b057-ebac30de30ec","Type":"ContainerDied","Data":"4b6b483e6731f3ad7e4d48e8031f07158e2f634be04aa9cba8141e39e925729d"} Feb 26 17:24:05 crc kubenswrapper[4805]: I0226 17:24:05.137403 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b6b483e6731f3ad7e4d48e8031f07158e2f634be04aa9cba8141e39e925729d" Feb 26 17:24:05 crc kubenswrapper[4805]: I0226 17:24:05.137456 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535444-vbq7n" Feb 26 17:24:05 crc kubenswrapper[4805]: I0226 17:24:05.440453 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535438-d8kz2"] Feb 26 17:24:05 crc kubenswrapper[4805]: I0226 17:24:05.445305 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535438-d8kz2"] Feb 26 17:24:06 crc kubenswrapper[4805]: I0226 17:24:06.962183 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce8a1740-3334-42a4-af1d-1a7de4758c9c" path="/var/lib/kubelet/pods/ce8a1740-3334-42a4-af1d-1a7de4758c9c/volumes" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.150370 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535446-6bz22"] Feb 26 17:26:00 crc kubenswrapper[4805]: E0226 17:26:00.151372 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="31ac4c66-1f9e-4091-b057-ebac30de30ec" containerName="oc" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.151394 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ac4c66-1f9e-4091-b057-ebac30de30ec" containerName="oc" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.151530 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ac4c66-1f9e-4091-b057-ebac30de30ec" containerName="oc" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.152069 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.154077 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.154554 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.156163 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.158810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535446-6bz22"] Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.337236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtvtx\" (UniqueName: \"kubernetes.io/projected/80bff9b9-9a19-4950-b00a-395ca080797c-kube-api-access-jtvtx\") pod \"auto-csr-approver-29535446-6bz22\" (UID: \"80bff9b9-9a19-4950-b00a-395ca080797c\") " pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.438823 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtvtx\" (UniqueName: 
\"kubernetes.io/projected/80bff9b9-9a19-4950-b00a-395ca080797c-kube-api-access-jtvtx\") pod \"auto-csr-approver-29535446-6bz22\" (UID: \"80bff9b9-9a19-4950-b00a-395ca080797c\") " pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.463689 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtvtx\" (UniqueName: \"kubernetes.io/projected/80bff9b9-9a19-4950-b00a-395ca080797c-kube-api-access-jtvtx\") pod \"auto-csr-approver-29535446-6bz22\" (UID: \"80bff9b9-9a19-4950-b00a-395ca080797c\") " pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.474498 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.672447 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535446-6bz22"] Feb 26 17:26:00 crc kubenswrapper[4805]: I0226 17:26:00.832103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535446-6bz22" event={"ID":"80bff9b9-9a19-4950-b00a-395ca080797c","Type":"ContainerStarted","Data":"54ef7d630063d794fea241eb66efbf94d7380e009f0afe3de98eec9d367cf816"} Feb 26 17:26:02 crc kubenswrapper[4805]: I0226 17:26:02.977812 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:26:02 crc kubenswrapper[4805]: I0226 17:26:02.977888 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:26:03 crc kubenswrapper[4805]: I0226 17:26:03.847888 4805 generic.go:334] "Generic (PLEG): container finished" podID="80bff9b9-9a19-4950-b00a-395ca080797c" containerID="4ef3f1e43cc6400150fb7d1ee5c46370790b44e1041fdf1f06bf0af1bb23ae5d" exitCode=0 Feb 26 17:26:03 crc kubenswrapper[4805]: I0226 17:26:03.847983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535446-6bz22" event={"ID":"80bff9b9-9a19-4950-b00a-395ca080797c","Type":"ContainerDied","Data":"4ef3f1e43cc6400150fb7d1ee5c46370790b44e1041fdf1f06bf0af1bb23ae5d"} Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.067813 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.193261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtvtx\" (UniqueName: \"kubernetes.io/projected/80bff9b9-9a19-4950-b00a-395ca080797c-kube-api-access-jtvtx\") pod \"80bff9b9-9a19-4950-b00a-395ca080797c\" (UID: \"80bff9b9-9a19-4950-b00a-395ca080797c\") " Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.200875 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bff9b9-9a19-4950-b00a-395ca080797c-kube-api-access-jtvtx" (OuterVolumeSpecName: "kube-api-access-jtvtx") pod "80bff9b9-9a19-4950-b00a-395ca080797c" (UID: "80bff9b9-9a19-4950-b00a-395ca080797c"). InnerVolumeSpecName "kube-api-access-jtvtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.294667 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtvtx\" (UniqueName: \"kubernetes.io/projected/80bff9b9-9a19-4950-b00a-395ca080797c-kube-api-access-jtvtx\") on node \"crc\" DevicePath \"\"" Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.858176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535446-6bz22" event={"ID":"80bff9b9-9a19-4950-b00a-395ca080797c","Type":"ContainerDied","Data":"54ef7d630063d794fea241eb66efbf94d7380e009f0afe3de98eec9d367cf816"} Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.858425 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ef7d630063d794fea241eb66efbf94d7380e009f0afe3de98eec9d367cf816" Feb 26 17:26:05 crc kubenswrapper[4805]: I0226 17:26:05.858226 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535446-6bz22" Feb 26 17:26:06 crc kubenswrapper[4805]: I0226 17:26:06.123380 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535440-nztxm"] Feb 26 17:26:06 crc kubenswrapper[4805]: I0226 17:26:06.127937 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535440-nztxm"] Feb 26 17:26:06 crc kubenswrapper[4805]: I0226 17:26:06.962336 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e191647-de53-41b3-b7b9-5cb11ccb9f87" path="/var/lib/kubelet/pods/3e191647-de53-41b3-b7b9-5cb11ccb9f87/volumes" Feb 26 17:26:32 crc kubenswrapper[4805]: I0226 17:26:32.978473 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 17:26:32 crc kubenswrapper[4805]: I0226 17:26:32.979150 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:26:48 crc kubenswrapper[4805]: I0226 17:26:48.916783 4805 scope.go:117] "RemoveContainer" containerID="6a678337576ab19bb9ec69028e8af9e0e6aac50abc487291596e0d465fff2805" Feb 26 17:26:48 crc kubenswrapper[4805]: I0226 17:26:48.951852 4805 scope.go:117] "RemoveContainer" containerID="0a52605714e6f534fc7ed16c2450ffcae0ceeeda0f1395acca87897d11095cdd" Feb 26 17:27:02 crc kubenswrapper[4805]: I0226 17:27:02.977626 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:27:02 crc kubenswrapper[4805]: I0226 17:27:02.978308 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:27:02 crc kubenswrapper[4805]: I0226 17:27:02.978367 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:27:02 crc kubenswrapper[4805]: I0226 17:27:02.979163 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2ad740ea60d5ab6cc5388ad856d449d28f7d0452892c7d5d07969ba24766bab4"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:27:02 crc kubenswrapper[4805]: I0226 17:27:02.979269 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://2ad740ea60d5ab6cc5388ad856d449d28f7d0452892c7d5d07969ba24766bab4" gracePeriod=600 Feb 26 17:27:03 crc kubenswrapper[4805]: I0226 17:27:03.197971 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="2ad740ea60d5ab6cc5388ad856d449d28f7d0452892c7d5d07969ba24766bab4" exitCode=0 Feb 26 17:27:03 crc kubenswrapper[4805]: I0226 17:27:03.198049 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"2ad740ea60d5ab6cc5388ad856d449d28f7d0452892c7d5d07969ba24766bab4"} Feb 26 17:27:03 crc kubenswrapper[4805]: I0226 17:27:03.198219 4805 scope.go:117] "RemoveContainer" containerID="042dcdf4837fac5099eaa927fbb96bd6244e875ff2c0526c5bf517e80e23ce1d" Feb 26 17:27:04 crc kubenswrapper[4805]: I0226 17:27:04.206321 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"1d44138a55ef33aa1de9eac7f541bad377db04ef7075e41168f322227c042d08"} Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.149474 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535448-8n8lp"] Feb 26 17:28:00 crc kubenswrapper[4805]: E0226 
17:28:00.150842 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bff9b9-9a19-4950-b00a-395ca080797c" containerName="oc" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.150870 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bff9b9-9a19-4950-b00a-395ca080797c" containerName="oc" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.151237 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="80bff9b9-9a19-4950-b00a-395ca080797c" containerName="oc" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.152052 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.161052 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.162112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.162405 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.164722 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535448-8n8lp"] Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.312892 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm252\" (UniqueName: \"kubernetes.io/projected/d139485b-01a9-4993-b8a7-66dcc1008841-kube-api-access-wm252\") pod \"auto-csr-approver-29535448-8n8lp\" (UID: \"d139485b-01a9-4993-b8a7-66dcc1008841\") " pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.414340 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wm252\" (UniqueName: \"kubernetes.io/projected/d139485b-01a9-4993-b8a7-66dcc1008841-kube-api-access-wm252\") pod \"auto-csr-approver-29535448-8n8lp\" (UID: \"d139485b-01a9-4993-b8a7-66dcc1008841\") " pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.442152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm252\" (UniqueName: \"kubernetes.io/projected/d139485b-01a9-4993-b8a7-66dcc1008841-kube-api-access-wm252\") pod \"auto-csr-approver-29535448-8n8lp\" (UID: \"d139485b-01a9-4993-b8a7-66dcc1008841\") " pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.489161 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:00 crc kubenswrapper[4805]: I0226 17:28:00.720650 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535448-8n8lp"] Feb 26 17:28:01 crc kubenswrapper[4805]: I0226 17:28:01.554766 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" event={"ID":"d139485b-01a9-4993-b8a7-66dcc1008841","Type":"ContainerStarted","Data":"ed7eeac351c09d1c41898394bad9042144b87e565f769e450d12beb6c962e27a"} Feb 26 17:28:02 crc kubenswrapper[4805]: I0226 17:28:02.562986 4805 generic.go:334] "Generic (PLEG): container finished" podID="d139485b-01a9-4993-b8a7-66dcc1008841" containerID="bf3e363664202893aa2a5173369d952647d645a954f1e3faac6b4a08d210a3b2" exitCode=0 Feb 26 17:28:02 crc kubenswrapper[4805]: I0226 17:28:02.563071 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" event={"ID":"d139485b-01a9-4993-b8a7-66dcc1008841","Type":"ContainerDied","Data":"bf3e363664202893aa2a5173369d952647d645a954f1e3faac6b4a08d210a3b2"} Feb 26 17:28:03 crc kubenswrapper[4805]: I0226 
17:28:03.773056 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:03 crc kubenswrapper[4805]: I0226 17:28:03.960630 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm252\" (UniqueName: \"kubernetes.io/projected/d139485b-01a9-4993-b8a7-66dcc1008841-kube-api-access-wm252\") pod \"d139485b-01a9-4993-b8a7-66dcc1008841\" (UID: \"d139485b-01a9-4993-b8a7-66dcc1008841\") " Feb 26 17:28:03 crc kubenswrapper[4805]: I0226 17:28:03.966352 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d139485b-01a9-4993-b8a7-66dcc1008841-kube-api-access-wm252" (OuterVolumeSpecName: "kube-api-access-wm252") pod "d139485b-01a9-4993-b8a7-66dcc1008841" (UID: "d139485b-01a9-4993-b8a7-66dcc1008841"). InnerVolumeSpecName "kube-api-access-wm252". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.061983 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm252\" (UniqueName: \"kubernetes.io/projected/d139485b-01a9-4993-b8a7-66dcc1008841-kube-api-access-wm252\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.574964 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" event={"ID":"d139485b-01a9-4993-b8a7-66dcc1008841","Type":"ContainerDied","Data":"ed7eeac351c09d1c41898394bad9042144b87e565f769e450d12beb6c962e27a"} Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.575011 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed7eeac351c09d1c41898394bad9042144b87e565f769e450d12beb6c962e27a" Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.575066 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535448-8n8lp" Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.828593 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535442-dbgws"] Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.833950 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535442-dbgws"] Feb 26 17:28:04 crc kubenswrapper[4805]: I0226 17:28:04.964795 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781869ac-5375-418c-a9db-22ce36c326fe" path="/var/lib/kubelet/pods/781869ac-5375-418c-a9db-22ce36c326fe/volumes" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.667240 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx"] Feb 26 17:28:43 crc kubenswrapper[4805]: E0226 17:28:43.668263 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d139485b-01a9-4993-b8a7-66dcc1008841" containerName="oc" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.668287 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d139485b-01a9-4993-b8a7-66dcc1008841" containerName="oc" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.668468 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d139485b-01a9-4993-b8a7-66dcc1008841" containerName="oc" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.670162 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.673970 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx"] Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.674174 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.791161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.791525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.791729 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h9k\" (UniqueName: \"kubernetes.io/projected/c45bc675-11fe-4450-b640-fa2d62126bda-kube-api-access-j4h9k\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: 
I0226 17:28:43.892968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.893057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4h9k\" (UniqueName: \"kubernetes.io/projected/c45bc675-11fe-4450-b640-fa2d62126bda-kube-api-access-j4h9k\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.893117 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.893581 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.893652 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:43 crc kubenswrapper[4805]: I0226 17:28:43.921285 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4h9k\" (UniqueName: \"kubernetes.io/projected/c45bc675-11fe-4450-b640-fa2d62126bda-kube-api-access-j4h9k\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:44 crc kubenswrapper[4805]: I0226 17:28:44.036079 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:44 crc kubenswrapper[4805]: I0226 17:28:44.249836 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx"] Feb 26 17:28:44 crc kubenswrapper[4805]: I0226 17:28:44.819687 4805 generic.go:334] "Generic (PLEG): container finished" podID="c45bc675-11fe-4450-b640-fa2d62126bda" containerID="2fed14518862c8c705e2a79cd06d3b6fd35f9641ed9ba0f07be0aa33479991d2" exitCode=0 Feb 26 17:28:44 crc kubenswrapper[4805]: I0226 17:28:44.819783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" event={"ID":"c45bc675-11fe-4450-b640-fa2d62126bda","Type":"ContainerDied","Data":"2fed14518862c8c705e2a79cd06d3b6fd35f9641ed9ba0f07be0aa33479991d2"} Feb 26 17:28:44 crc kubenswrapper[4805]: I0226 17:28:44.820081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" event={"ID":"c45bc675-11fe-4450-b640-fa2d62126bda","Type":"ContainerStarted","Data":"d15cc072cf15e8f33c2c4c241579922563ff9fdeb2899083d9aecc21b8137695"} Feb 26 17:28:45 crc kubenswrapper[4805]: I0226 17:28:45.645642 4805 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.026490 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dmxg8"] Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.027749 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.041256 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmxg8"] Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.221236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-catalog-content\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.221349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-utilities\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.221398 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xccxh\" (UniqueName: 
\"kubernetes.io/projected/de2ab0e6-f960-4663-8b08-075f36a70c33-kube-api-access-xccxh\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.322678 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-utilities\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.322721 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xccxh\" (UniqueName: \"kubernetes.io/projected/de2ab0e6-f960-4663-8b08-075f36a70c33-kube-api-access-xccxh\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.322763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-catalog-content\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.323177 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-utilities\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.323228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-catalog-content\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.354362 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xccxh\" (UniqueName: \"kubernetes.io/projected/de2ab0e6-f960-4663-8b08-075f36a70c33-kube-api-access-xccxh\") pod \"redhat-operators-dmxg8\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.646388 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.838349 4805 generic.go:334] "Generic (PLEG): container finished" podID="c45bc675-11fe-4450-b640-fa2d62126bda" containerID="9e4675c54efb3c346b4a1f538a4e6b9c9bb41411b8d6ff37e40839e1e79ee599" exitCode=0 Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.838491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" event={"ID":"c45bc675-11fe-4450-b640-fa2d62126bda","Type":"ContainerDied","Data":"9e4675c54efb3c346b4a1f538a4e6b9c9bb41411b8d6ff37e40839e1e79ee599"} Feb 26 17:28:46 crc kubenswrapper[4805]: I0226 17:28:46.853873 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dmxg8"] Feb 26 17:28:46 crc kubenswrapper[4805]: W0226 17:28:46.863455 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2ab0e6_f960_4663_8b08_075f36a70c33.slice/crio-f872b8ac879a14b3c6769d88d2a12783bc07c0ca7bc5f6706d4a023aea37e783 WatchSource:0}: Error finding container f872b8ac879a14b3c6769d88d2a12783bc07c0ca7bc5f6706d4a023aea37e783: 
Status 404 returned error can't find the container with id f872b8ac879a14b3c6769d88d2a12783bc07c0ca7bc5f6706d4a023aea37e783 Feb 26 17:28:47 crc kubenswrapper[4805]: I0226 17:28:47.848326 4805 generic.go:334] "Generic (PLEG): container finished" podID="c45bc675-11fe-4450-b640-fa2d62126bda" containerID="c7c3090871f5dc550750a42c15be813a0514de07e6d33b1ce7470f8403e26963" exitCode=0 Feb 26 17:28:47 crc kubenswrapper[4805]: I0226 17:28:47.848404 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" event={"ID":"c45bc675-11fe-4450-b640-fa2d62126bda","Type":"ContainerDied","Data":"c7c3090871f5dc550750a42c15be813a0514de07e6d33b1ce7470f8403e26963"} Feb 26 17:28:47 crc kubenswrapper[4805]: I0226 17:28:47.850642 4805 generic.go:334] "Generic (PLEG): container finished" podID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerID="99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0" exitCode=0 Feb 26 17:28:47 crc kubenswrapper[4805]: I0226 17:28:47.850672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerDied","Data":"99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0"} Feb 26 17:28:47 crc kubenswrapper[4805]: I0226 17:28:47.850691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerStarted","Data":"f872b8ac879a14b3c6769d88d2a12783bc07c0ca7bc5f6706d4a023aea37e783"} Feb 26 17:28:48 crc kubenswrapper[4805]: I0226 17:28:48.857065 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerStarted","Data":"89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1"} Feb 26 17:28:49 crc kubenswrapper[4805]: 
I0226 17:28:49.012290 4805 scope.go:117] "RemoveContainer" containerID="edddc47e4bc778bc90cada61a753d495ef64356fbc6a59d94d95221441be045f" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.077817 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.157993 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-bundle\") pod \"c45bc675-11fe-4450-b640-fa2d62126bda\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.158116 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-util\") pod \"c45bc675-11fe-4450-b640-fa2d62126bda\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.158156 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4h9k\" (UniqueName: \"kubernetes.io/projected/c45bc675-11fe-4450-b640-fa2d62126bda-kube-api-access-j4h9k\") pod \"c45bc675-11fe-4450-b640-fa2d62126bda\" (UID: \"c45bc675-11fe-4450-b640-fa2d62126bda\") " Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.161189 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-bundle" (OuterVolumeSpecName: "bundle") pod "c45bc675-11fe-4450-b640-fa2d62126bda" (UID: "c45bc675-11fe-4450-b640-fa2d62126bda"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.164131 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45bc675-11fe-4450-b640-fa2d62126bda-kube-api-access-j4h9k" (OuterVolumeSpecName: "kube-api-access-j4h9k") pod "c45bc675-11fe-4450-b640-fa2d62126bda" (UID: "c45bc675-11fe-4450-b640-fa2d62126bda"). InnerVolumeSpecName "kube-api-access-j4h9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.189162 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-util" (OuterVolumeSpecName: "util") pod "c45bc675-11fe-4450-b640-fa2d62126bda" (UID: "c45bc675-11fe-4450-b640-fa2d62126bda"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.259823 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.259858 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c45bc675-11fe-4450-b640-fa2d62126bda-util\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.259867 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4h9k\" (UniqueName: \"kubernetes.io/projected/c45bc675-11fe-4450-b640-fa2d62126bda-kube-api-access-j4h9k\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.871258 4805 generic.go:334] "Generic (PLEG): container finished" podID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerID="89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1" exitCode=0 Feb 26 17:28:49 crc 
kubenswrapper[4805]: I0226 17:28:49.871360 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerDied","Data":"89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1"} Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.875057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" event={"ID":"c45bc675-11fe-4450-b640-fa2d62126bda","Type":"ContainerDied","Data":"d15cc072cf15e8f33c2c4c241579922563ff9fdeb2899083d9aecc21b8137695"} Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.875119 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d15cc072cf15e8f33c2c4c241579922563ff9fdeb2899083d9aecc21b8137695" Feb 26 17:28:49 crc kubenswrapper[4805]: I0226 17:28:49.875223 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx" Feb 26 17:28:50 crc kubenswrapper[4805]: I0226 17:28:50.881730 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerStarted","Data":"867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492"} Feb 26 17:28:50 crc kubenswrapper[4805]: I0226 17:28:50.902268 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dmxg8" podStartSLOduration=2.46386581 podStartE2EDuration="4.902246861s" podCreationTimestamp="2026-02-26 17:28:46 +0000 UTC" firstStartedPulling="2026-02-26 17:28:47.852402614 +0000 UTC m=+842.414156953" lastFinishedPulling="2026-02-26 17:28:50.290783605 +0000 UTC m=+844.852538004" observedRunningTime="2026-02-26 17:28:50.898042432 +0000 UTC m=+845.459796781" 
watchObservedRunningTime="2026-02-26 17:28:50.902246861 +0000 UTC m=+845.464001210" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.037616 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqbgw"] Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038237 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-controller" containerID="cri-o://f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038260 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="northd" containerID="cri-o://acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038315 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-node" containerID="cri-o://a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038313 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="nbdb" containerID="cri-o://678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038369 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038389 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-acl-logging" containerID="cri-o://8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.038318 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="sbdb" containerID="cri-o://4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.092415 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" containerID="cri-o://c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" gracePeriod=30 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.762413 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/3.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.765179 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovn-acl-logging/0.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.767516 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovn-controller/0.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.768202 4805 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818367 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2htqs"] Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818601 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="extract" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818617 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="extract" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818630 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818637 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818645 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818653 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818660 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818670 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818677 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="util" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818685 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="util" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818698 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="nbdb" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818705 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="nbdb" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818717 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kubecfg-setup" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818724 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kubecfg-setup" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818733 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="northd" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818740 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="northd" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818751 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-acl-logging" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818758 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-acl-logging" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818768 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" 
Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818775 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818784 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="sbdb" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818791 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="sbdb" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818803 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818811 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818822 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-node" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818829 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-node" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.818840 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="pull" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818847 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="pull" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818967 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-acl-logging" Feb 26 17:28:55 crc 
kubenswrapper[4805]: I0226 17:28:55.818982 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="northd" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.818992 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819000 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-node" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819011 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="kube-rbac-proxy-ovn-metrics" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819035 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="nbdb" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819045 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819055 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="sbdb" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819064 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819073 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819084 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45bc675-11fe-4450-b640-fa2d62126bda" containerName="extract" 
Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819094 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovn-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.819210 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819220 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819334 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: E0226 17:28:55.819446 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.819455 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerName="ovnkube-controller" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.821208 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.835547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-kubelet\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.835655 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-systemd-units\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.907432 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/2.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.908110 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/1.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.908156 4805 generic.go:334] "Generic (PLEG): container finished" podID="4cefacfa-0108-4252-aa69-4b35bcc0f69f" containerID="f05a848fc8fc044ab0d3f773b175d5c5a19a41680b5d67af5b6dabb86f31f070" exitCode=2 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.908248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerDied","Data":"f05a848fc8fc044ab0d3f773b175d5c5a19a41680b5d67af5b6dabb86f31f070"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.908334 4805 scope.go:117] "RemoveContainer" 
containerID="6b4d5739a7ef7ce9e6f2a3c653cc6d361b9bb0995d8899611d5196dfb304d82a" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.909067 4805 scope.go:117] "RemoveContainer" containerID="f05a848fc8fc044ab0d3f773b175d5c5a19a41680b5d67af5b6dabb86f31f070" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.911239 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovnkube-controller/3.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.913290 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovn-acl-logging/0.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.913818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqbgw_1d434db3-db90-41b2-9bd3-e6ef3009f878/ovn-controller/0.log" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914812 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" exitCode=0 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914832 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" exitCode=0 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914839 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" exitCode=0 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914845 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" exitCode=0 Feb 26 17:28:55 crc 
kubenswrapper[4805]: I0226 17:28:55.914852 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" exitCode=0 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914859 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" exitCode=0 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914865 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" exitCode=143 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914871 4805 generic.go:334] "Generic (PLEG): container finished" podID="1d434db3-db90-41b2-9bd3-e6ef3009f878" containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" exitCode=143 Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914912 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} Feb 26 17:28:55 crc kubenswrapper[4805]: 
I0226 17:28:55.914932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914940 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914959 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914968 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914973 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914978 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914982 4805 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914987 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914992 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.914996 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915001 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915005 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915027 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915036 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} Feb 26 
17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915042 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915047 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915052 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915057 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915062 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915067 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915073 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915078 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} Feb 26 
17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915083 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915097 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915103 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915109 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915115 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915120 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915125 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915130 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915135 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915140 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915145 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915151 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" event={"ID":"1d434db3-db90-41b2-9bd3-e6ef3009f878","Type":"ContainerDied","Data":"cad97b9b4310c1cae6fdac72c6b42fc2ff79310dd1574698d457f241a171bf5f"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915159 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915164 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915170 4805 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915174 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915179 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915184 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915188 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915193 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915198 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915202 4805 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.915276 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqbgw" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937294 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovn-node-metrics-cert\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937360 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-config\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-systemd-units\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-script-lib\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937437 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-ovn\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937462 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-kubelet\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937478 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-bin\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937504 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-env-overrides\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937520 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-var-lib-openvswitch\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937537 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-systemd\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937553 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-netns\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: 
\"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937601 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-openvswitch\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937628 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937644 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-log-socket\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937657 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-netd\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937689 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-etc-openvswitch\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937703 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-ovn-kubernetes\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937716 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-node-log\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937730 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-slash\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937756 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kffw\" (UniqueName: \"kubernetes.io/projected/1d434db3-db90-41b2-9bd3-e6ef3009f878-kube-api-access-5kffw\") pod \"1d434db3-db90-41b2-9bd3-e6ef3009f878\" (UID: \"1d434db3-db90-41b2-9bd3-e6ef3009f878\") " Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937909 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-kubelet\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937939 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-run-ovn-kubernetes\") pod \"ovnkube-node-2htqs\" (UID: 
\"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937958 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-etc-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937973 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovn-node-metrics-cert\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.937994 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-log-socket\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938038 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-cni-netd\") pod 
\"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-ovn\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovnkube-config\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938089 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhbg\" (UniqueName: \"kubernetes.io/projected/88e6bbd3-22a7-42ca-9b31-85e3d641b946-kube-api-access-mfhbg\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938105 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-systemd\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-cni-bin\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938137 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-var-lib-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938155 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-slash\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938169 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-env-overrides\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938188 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovnkube-script-lib\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938207 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-run-netns\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938220 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-node-log\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938239 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938258 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-systemd-units\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.938321 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-systemd-units\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939509 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-kubelet\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939567 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939586 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939605 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939623 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-log-socket" (OuterVolumeSpecName: "log-socket") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939886 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.939926 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940063 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940182 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940186 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940203 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940223 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940293 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-slash" (OuterVolumeSpecName: "host-slash") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940302 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-node-log" (OuterVolumeSpecName: "node-log") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940519 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.940637 4805 scope.go:117] "RemoveContainer" containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.945122 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d434db3-db90-41b2-9bd3-e6ef3009f878-kube-api-access-5kffw" (OuterVolumeSpecName: "kube-api-access-5kffw") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "kube-api-access-5kffw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.949510 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.953676 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1d434db3-db90-41b2-9bd3-e6ef3009f878" (UID: "1d434db3-db90-41b2-9bd3-e6ef3009f878"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:28:55 crc kubenswrapper[4805]: I0226 17:28:55.977330 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.007820 4805 scope.go:117] "RemoveContainer" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.019824 4805 scope.go:117] "RemoveContainer" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.039155 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-run-ovn-kubernetes\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.039274 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-run-ovn-kubernetes\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.039986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-etc-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040102 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovn-node-metrics-cert\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-log-socket\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-cni-netd\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-ovn\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040293 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovnkube-config\") pod 
\"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfhbg\" (UniqueName: \"kubernetes.io/projected/88e6bbd3-22a7-42ca-9b31-85e3d641b946-kube-api-access-mfhbg\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-cni-netd\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-systemd\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040404 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-cni-bin\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040406 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-ovn\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-var-lib-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-slash\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-env-overrides\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040544 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovnkube-script-lib\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-run-netns\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc 
kubenswrapper[4805]: I0226 17:28:56.040621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-node-log\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040643 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-slash\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040660 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040678 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-cni-bin\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040712 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-var-lib-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040751 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-systemd\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040773 4805 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040785 4805 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040796 4805 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-run-netns\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040806 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040814 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-node-log\") pod 
\"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040839 4805 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040856 4805 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040898 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040914 4805 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-log-socket\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040930 4805 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040942 
4805 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-slash\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040957 4805 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.040971 4805 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-node-log\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041371 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kffw\" (UniqueName: \"kubernetes.io/projected/1d434db3-db90-41b2-9bd3-e6ef3009f878-kube-api-access-5kffw\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041391 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041400 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041409 4805 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041417 4805 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/1d434db3-db90-41b2-9bd3-e6ef3009f878-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041426 4805 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041436 4805 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041444 4805 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1d434db3-db90-41b2-9bd3-e6ef3009f878-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-run-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.041999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-etc-openvswitch\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.042084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovnkube-config\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.042156 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovnkube-script-lib\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.042036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88e6bbd3-22a7-42ca-9b31-85e3d641b946-env-overrides\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.042228 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/88e6bbd3-22a7-42ca-9b31-85e3d641b946-log-socket\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.048439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88e6bbd3-22a7-42ca-9b31-85e3d641b946-ovn-node-metrics-cert\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.050280 4805 scope.go:117] "RemoveContainer" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.065799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfhbg\" (UniqueName: 
\"kubernetes.io/projected/88e6bbd3-22a7-42ca-9b31-85e3d641b946-kube-api-access-mfhbg\") pod \"ovnkube-node-2htqs\" (UID: \"88e6bbd3-22a7-42ca-9b31-85e3d641b946\") " pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.085590 4805 scope.go:117] "RemoveContainer" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.116249 4805 scope.go:117] "RemoveContainer" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.130789 4805 scope.go:117] "RemoveContainer" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.136574 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.143057 4805 scope.go:117] "RemoveContainer" containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.157650 4805 scope.go:117] "RemoveContainer" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.214785 4805 scope.go:117] "RemoveContainer" containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.215292 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": container with ID starting with c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd not found: ID does not exist" containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 
17:28:56.215351 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} err="failed to get container status \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": rpc error: code = NotFound desc = could not find container \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": container with ID starting with c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.215382 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.215809 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": container with ID starting with 74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623 not found: ID does not exist" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.215836 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} err="failed to get container status \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": rpc error: code = NotFound desc = could not find container \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": container with ID starting with 74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.215859 4805 scope.go:117] "RemoveContainer" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" Feb 26 17:28:56 crc 
kubenswrapper[4805]: E0226 17:28:56.216363 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": container with ID starting with 4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa not found: ID does not exist" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.216399 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} err="failed to get container status \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": rpc error: code = NotFound desc = could not find container \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": container with ID starting with 4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.216422 4805 scope.go:117] "RemoveContainer" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.216700 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": container with ID starting with 678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38 not found: ID does not exist" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.216739 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} err="failed to get container status 
\"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": rpc error: code = NotFound desc = could not find container \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": container with ID starting with 678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.216767 4805 scope.go:117] "RemoveContainer" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.217149 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": container with ID starting with acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34 not found: ID does not exist" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.217179 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} err="failed to get container status \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": rpc error: code = NotFound desc = could not find container \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": container with ID starting with acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.217197 4805 scope.go:117] "RemoveContainer" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.217450 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": container with ID starting with 1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f not found: ID does not exist" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.217530 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} err="failed to get container status \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": rpc error: code = NotFound desc = could not find container \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": container with ID starting with 1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.217550 4805 scope.go:117] "RemoveContainer" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.217951 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": container with ID starting with a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553 not found: ID does not exist" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.217994 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} err="failed to get container status \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": rpc error: code = NotFound desc = could not find container \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": container with ID 
starting with a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.218026 4805 scope.go:117] "RemoveContainer" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.218358 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": container with ID starting with 8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c not found: ID does not exist" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.218405 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} err="failed to get container status \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": rpc error: code = NotFound desc = could not find container \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": container with ID starting with 8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.218433 4805 scope.go:117] "RemoveContainer" containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.218809 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": container with ID starting with f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98 not found: ID does not exist" containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" Feb 26 
17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.218841 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} err="failed to get container status \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": rpc error: code = NotFound desc = could not find container \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": container with ID starting with f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.218858 4805 scope.go:117] "RemoveContainer" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" Feb 26 17:28:56 crc kubenswrapper[4805]: E0226 17:28:56.219219 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": container with ID starting with e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607 not found: ID does not exist" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.219261 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} err="failed to get container status \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": rpc error: code = NotFound desc = could not find container \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": container with ID starting with e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.219282 4805 scope.go:117] "RemoveContainer" 
containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.219523 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} err="failed to get container status \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": rpc error: code = NotFound desc = could not find container \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": container with ID starting with c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.219542 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.219800 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} err="failed to get container status \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": rpc error: code = NotFound desc = could not find container \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": container with ID starting with 74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.219830 4805 scope.go:117] "RemoveContainer" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.220128 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} err="failed to get container status \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": rpc error: code = NotFound desc = could 
not find container \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": container with ID starting with 4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.220147 4805 scope.go:117] "RemoveContainer" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.220390 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} err="failed to get container status \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": rpc error: code = NotFound desc = could not find container \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": container with ID starting with 678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.220416 4805 scope.go:117] "RemoveContainer" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.220647 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} err="failed to get container status \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": rpc error: code = NotFound desc = could not find container \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": container with ID starting with acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.220664 4805 scope.go:117] "RemoveContainer" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 
17:28:56.221031 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} err="failed to get container status \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": rpc error: code = NotFound desc = could not find container \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": container with ID starting with 1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221061 4805 scope.go:117] "RemoveContainer" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221312 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} err="failed to get container status \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": rpc error: code = NotFound desc = could not find container \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": container with ID starting with a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221340 4805 scope.go:117] "RemoveContainer" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221679 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} err="failed to get container status \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": rpc error: code = NotFound desc = could not find container \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": container with ID starting with 
8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221700 4805 scope.go:117] "RemoveContainer" containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221908 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} err="failed to get container status \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": rpc error: code = NotFound desc = could not find container \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": container with ID starting with f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.221933 4805 scope.go:117] "RemoveContainer" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222244 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} err="failed to get container status \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": rpc error: code = NotFound desc = could not find container \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": container with ID starting with e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222270 4805 scope.go:117] "RemoveContainer" containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222510 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} err="failed to get container status \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": rpc error: code = NotFound desc = could not find container \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": container with ID starting with c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222533 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222731 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} err="failed to get container status \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": rpc error: code = NotFound desc = could not find container \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": container with ID starting with 74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222750 4805 scope.go:117] "RemoveContainer" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222950 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} err="failed to get container status \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": rpc error: code = NotFound desc = could not find container \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": container with ID starting with 4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa not found: ID does not 
exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.222968 4805 scope.go:117] "RemoveContainer" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.223247 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} err="failed to get container status \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": rpc error: code = NotFound desc = could not find container \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": container with ID starting with 678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.223292 4805 scope.go:117] "RemoveContainer" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.223552 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} err="failed to get container status \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": rpc error: code = NotFound desc = could not find container \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": container with ID starting with acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.223579 4805 scope.go:117] "RemoveContainer" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.223838 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} err="failed to get container status 
\"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": rpc error: code = NotFound desc = could not find container \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": container with ID starting with 1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.223861 4805 scope.go:117] "RemoveContainer" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.224314 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} err="failed to get container status \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": rpc error: code = NotFound desc = could not find container \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": container with ID starting with a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.224343 4805 scope.go:117] "RemoveContainer" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.224560 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} err="failed to get container status \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": rpc error: code = NotFound desc = could not find container \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": container with ID starting with 8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.224584 4805 scope.go:117] "RemoveContainer" 
containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.224833 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} err="failed to get container status \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": rpc error: code = NotFound desc = could not find container \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": container with ID starting with f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.224863 4805 scope.go:117] "RemoveContainer" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.225144 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} err="failed to get container status \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": rpc error: code = NotFound desc = could not find container \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": container with ID starting with e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.225170 4805 scope.go:117] "RemoveContainer" containerID="c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.225456 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd"} err="failed to get container status \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": rpc error: code = NotFound desc = could 
not find container \"c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd\": container with ID starting with c203122f004abb6dc03d9a844cec9016420dc12551d7a69210b37d82829728bd not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.225479 4805 scope.go:117] "RemoveContainer" containerID="74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.225823 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623"} err="failed to get container status \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": rpc error: code = NotFound desc = could not find container \"74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623\": container with ID starting with 74a32d59b8199de8d73ca53480f21e331c011a1975faa42a6cf8d975276d3623 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.225848 4805 scope.go:117] "RemoveContainer" containerID="4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.226090 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa"} err="failed to get container status \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": rpc error: code = NotFound desc = could not find container \"4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa\": container with ID starting with 4fa2fe506150f96798dd3e5c8adce8106053f638bce9f1251eeaa67eed0147aa not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.226116 4805 scope.go:117] "RemoveContainer" containerID="678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 
17:28:56.235160 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38"} err="failed to get container status \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": rpc error: code = NotFound desc = could not find container \"678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38\": container with ID starting with 678723793a1cd30f5f5ee5d0ed5709c5abaf808b8a1eb750f30bead23d076a38 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.235191 4805 scope.go:117] "RemoveContainer" containerID="acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.235565 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34"} err="failed to get container status \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": rpc error: code = NotFound desc = could not find container \"acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34\": container with ID starting with acbf013d3350db9b6a99468ac6a24842f21c7c3f30653988bd7546e2b81c1b34 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.235591 4805 scope.go:117] "RemoveContainer" containerID="1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.235877 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f"} err="failed to get container status \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": rpc error: code = NotFound desc = could not find container \"1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f\": container with ID starting with 
1456a1b666e82d0f53c6c185f695115cb2da2b438f0adab3dcdbf424e839664f not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.235895 4805 scope.go:117] "RemoveContainer" containerID="a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.236298 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553"} err="failed to get container status \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": rpc error: code = NotFound desc = could not find container \"a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553\": container with ID starting with a2150574e0b29a8bb250311f98bcc51aecd5fae2990277a2789fc7878abc8553 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.236346 4805 scope.go:117] "RemoveContainer" containerID="8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.236697 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c"} err="failed to get container status \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": rpc error: code = NotFound desc = could not find container \"8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c\": container with ID starting with 8e559cea57d783893797b509de9a1f6d842616d6a43ab69c6ebdc6c16de3d15c not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.236723 4805 scope.go:117] "RemoveContainer" containerID="f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.236979 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98"} err="failed to get container status \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": rpc error: code = NotFound desc = could not find container \"f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98\": container with ID starting with f99a15a92bea8598a08a35d6b4b65a6f53f822b495b5a90dc355784f1f7c1c98 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.237005 4805 scope.go:117] "RemoveContainer" containerID="e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.237464 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607"} err="failed to get container status \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": rpc error: code = NotFound desc = could not find container \"e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607\": container with ID starting with e4238a503deb42cdb53162b712c419a5cb2f84b338dae2102affc73e70920607 not found: ID does not exist" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.267178 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqbgw"] Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.270941 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqbgw"] Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.647421 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.647483 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 
17:28:56.922337 4805 generic.go:334] "Generic (PLEG): container finished" podID="88e6bbd3-22a7-42ca-9b31-85e3d641b946" containerID="f90aeec32bfb07ce8d5090ca3edf1209479ab630cf531d822d3eb877a2c79406" exitCode=0 Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.922432 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerDied","Data":"f90aeec32bfb07ce8d5090ca3edf1209479ab630cf531d822d3eb877a2c79406"} Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.922770 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"7ddc360cd9a0dd8640ed82657afa736101d6e8a68db743ba6dcd84dea3c23460"} Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.924915 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tv2pd_4cefacfa-0108-4252-aa69-4b35bcc0f69f/kube-multus/2.log" Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.924949 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tv2pd" event={"ID":"4cefacfa-0108-4252-aa69-4b35bcc0f69f","Type":"ContainerStarted","Data":"b6007537fb5548bfb1dc210ebac79a2fb59660c21768550f2895ad61f48abe42"} Feb 26 17:28:56 crc kubenswrapper[4805]: I0226 17:28:56.965809 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d434db3-db90-41b2-9bd3-e6ef3009f878" path="/var/lib/kubelet/pods/1d434db3-db90-41b2-9bd3-e6ef3009f878/volumes" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.673489 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-htjln"] Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.674322 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.676937 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-bq6zj" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.677111 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.678507 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.689421 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dmxg8" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="registry-server" probeResult="failure" output=< Feb 26 17:28:57 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:28:57 crc kubenswrapper[4805]: > Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.787841 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4"] Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.788532 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.791918 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.792314 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5cmdw" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.800541 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p"] Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.801214 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.860799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhf9\" (UniqueName: \"kubernetes.io/projected/dfc0288a-d269-4568-a6f0-57bd9fa6cfcc-kube-api-access-rdhf9\") pod \"obo-prometheus-operator-68bc856cb9-htjln\" (UID: \"dfc0288a-d269-4568-a6f0-57bd9fa6cfcc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.906896 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t7gbj"] Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.907534 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.909870 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.910431 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7lb52" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.940644 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"cdfaa3d65ff395e107508137cc0b2ecbd81846e305e9c814e7aa0c6b42c86b00"} Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.940692 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"2696f8b44c888f56fb37b7451986a6f573a2ea2ca0309106e75d2dea834c7a11"} Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.940705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"96a0ef9f6040b1a97e073febf92b836d00f2e4c2e8c2992f839b356c2bd18d0f"} Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.940715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"642a89aa2fbb735fd1af15de149d978d5b14a3125b39fb257ee31253c10bd917"} Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.940723 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" 
event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"892e0b2fcc531ea21d9f40e59882d19f76590aec246288263964251fcd81f12c"} Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.940731 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"ebc87ed6174ce9f8687956909ef87eeaed2dd6ded26c8562b0858cdd0fc23bc1"} Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.961653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4\" (UID: \"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.961713 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdhf9\" (UniqueName: \"kubernetes.io/projected/dfc0288a-d269-4568-a6f0-57bd9fa6cfcc-kube-api-access-rdhf9\") pod \"obo-prometheus-operator-68bc856cb9-htjln\" (UID: \"dfc0288a-d269-4568-a6f0-57bd9fa6cfcc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.961738 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174cbe90-075b-4c73-ae20-cc8a47c42d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p\" (UID: \"174cbe90-075b-4c73-ae20-cc8a47c42d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.961765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4\" (UID: \"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.961782 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174cbe90-075b-4c73-ae20-cc8a47c42d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p\" (UID: \"174cbe90-075b-4c73-ae20-cc8a47c42d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:57 crc kubenswrapper[4805]: I0226 17:28:57.992952 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdhf9\" (UniqueName: \"kubernetes.io/projected/dfc0288a-d269-4568-a6f0-57bd9fa6cfcc-kube-api-access-rdhf9\") pod \"obo-prometheus-operator-68bc856cb9-htjln\" (UID: \"dfc0288a-d269-4568-a6f0-57bd9fa6cfcc\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.063146 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4\" (UID: \"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.063196 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jws9r\" (UniqueName: \"kubernetes.io/projected/df96d4df-f5b6-4b7b-956d-f957313d1914-kube-api-access-jws9r\") pod 
\"observability-operator-59bdc8b94-t7gbj\" (UID: \"df96d4df-f5b6-4b7b-956d-f957313d1914\") " pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.063232 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174cbe90-075b-4c73-ae20-cc8a47c42d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p\" (UID: \"174cbe90-075b-4c73-ae20-cc8a47c42d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.063251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/df96d4df-f5b6-4b7b-956d-f957313d1914-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t7gbj\" (UID: \"df96d4df-f5b6-4b7b-956d-f957313d1914\") " pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.063277 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4\" (UID: \"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.063293 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174cbe90-075b-4c73-ae20-cc8a47c42d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p\" (UID: \"174cbe90-075b-4c73-ae20-cc8a47c42d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 
17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.068600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/174cbe90-075b-4c73-ae20-cc8a47c42d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p\" (UID: \"174cbe90-075b-4c73-ae20-cc8a47c42d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.068683 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/174cbe90-075b-4c73-ae20-cc8a47c42d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p\" (UID: \"174cbe90-075b-4c73-ae20-cc8a47c42d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.068791 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4\" (UID: \"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.069456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4\" (UID: \"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.102981 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.103542 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2xtcm"] Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.104206 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.107580 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-vx74k" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.117928 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.138784 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(ad555c54689f5f72d907a1d05a29930ae88d680ee854b34559fd0c793be6fe98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.138850 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(ad555c54689f5f72d907a1d05a29930ae88d680ee854b34559fd0c793be6fe98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.138872 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(ad555c54689f5f72d907a1d05a29930ae88d680ee854b34559fd0c793be6fe98): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.138913 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators(b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators(b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(ad555c54689f5f72d907a1d05a29930ae88d680ee854b34559fd0c793be6fe98): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" podUID="b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.149384 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2523164e94f116c246e18542311b162cb79ff5fd68f1aee7664e4e107089b8c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.149457 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2523164e94f116c246e18542311b162cb79ff5fd68f1aee7664e4e107089b8c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.149488 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2523164e94f116c246e18542311b162cb79ff5fd68f1aee7664e4e107089b8c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.149549 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators(174cbe90-075b-4c73-ae20-cc8a47c42d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators(174cbe90-075b-4c73-ae20-cc8a47c42d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2523164e94f116c246e18542311b162cb79ff5fd68f1aee7664e4e107089b8c0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" podUID="174cbe90-075b-4c73-ae20-cc8a47c42d06" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.163958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/df96d4df-f5b6-4b7b-956d-f957313d1914-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t7gbj\" (UID: \"df96d4df-f5b6-4b7b-956d-f957313d1914\") " pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.164086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jws9r\" (UniqueName: \"kubernetes.io/projected/df96d4df-f5b6-4b7b-956d-f957313d1914-kube-api-access-jws9r\") pod \"observability-operator-59bdc8b94-t7gbj\" (UID: \"df96d4df-f5b6-4b7b-956d-f957313d1914\") " pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.168431 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/df96d4df-f5b6-4b7b-956d-f957313d1914-observability-operator-tls\") pod \"observability-operator-59bdc8b94-t7gbj\" (UID: \"df96d4df-f5b6-4b7b-956d-f957313d1914\") " pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.189821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jws9r\" (UniqueName: \"kubernetes.io/projected/df96d4df-f5b6-4b7b-956d-f957313d1914-kube-api-access-jws9r\") pod \"observability-operator-59bdc8b94-t7gbj\" (UID: \"df96d4df-f5b6-4b7b-956d-f957313d1914\") " pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.221555 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.242677 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(80ffbd3e38ab6cd03fa55748eda0d55f0b9092179a14f14af350c501f2deddea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.242746 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(80ffbd3e38ab6cd03fa55748eda0d55f0b9092179a14f14af350c501f2deddea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.242769 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(80ffbd3e38ab6cd03fa55748eda0d55f0b9092179a14f14af350c501f2deddea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.242820 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-t7gbj_openshift-operators(df96d4df-f5b6-4b7b-956d-f957313d1914)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-t7gbj_openshift-operators(df96d4df-f5b6-4b7b-956d-f957313d1914)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(80ffbd3e38ab6cd03fa55748eda0d55f0b9092179a14f14af350c501f2deddea): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" podUID="df96d4df-f5b6-4b7b-956d-f957313d1914" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.265746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnjlh\" (UniqueName: \"kubernetes.io/projected/c48a9594-81fc-493b-98b0-fc3ad286abe2-kube-api-access-wnjlh\") pod \"perses-operator-5bf474d74f-2xtcm\" (UID: \"c48a9594-81fc-493b-98b0-fc3ad286abe2\") " pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.265805 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c48a9594-81fc-493b-98b0-fc3ad286abe2-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2xtcm\" (UID: \"c48a9594-81fc-493b-98b0-fc3ad286abe2\") " pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.287640 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.306108 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(0dc951f251a65e6178b1b3831378f2c75baf420a691a4b9f1cd4294adb7d7221): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.306176 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(0dc951f251a65e6178b1b3831378f2c75baf420a691a4b9f1cd4294adb7d7221): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.306205 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(0dc951f251a65e6178b1b3831378f2c75baf420a691a4b9f1cd4294adb7d7221): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.306257 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-htjln_openshift-operators(dfc0288a-d269-4568-a6f0-57bd9fa6cfcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-htjln_openshift-operators(dfc0288a-d269-4568-a6f0-57bd9fa6cfcc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(0dc951f251a65e6178b1b3831378f2c75baf420a691a4b9f1cd4294adb7d7221): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" podUID="dfc0288a-d269-4568-a6f0-57bd9fa6cfcc" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.366711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnjlh\" (UniqueName: \"kubernetes.io/projected/c48a9594-81fc-493b-98b0-fc3ad286abe2-kube-api-access-wnjlh\") pod \"perses-operator-5bf474d74f-2xtcm\" (UID: \"c48a9594-81fc-493b-98b0-fc3ad286abe2\") " pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.366773 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c48a9594-81fc-493b-98b0-fc3ad286abe2-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2xtcm\" (UID: \"c48a9594-81fc-493b-98b0-fc3ad286abe2\") " pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.367825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c48a9594-81fc-493b-98b0-fc3ad286abe2-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2xtcm\" (UID: \"c48a9594-81fc-493b-98b0-fc3ad286abe2\") " pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.390729 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnjlh\" (UniqueName: \"kubernetes.io/projected/c48a9594-81fc-493b-98b0-fc3ad286abe2-kube-api-access-wnjlh\") pod \"perses-operator-5bf474d74f-2xtcm\" (UID: \"c48a9594-81fc-493b-98b0-fc3ad286abe2\") " pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: I0226 17:28:58.467167 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.500465 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(cb8d3533af752b847581d90dec133cb757b6f6dccbc662dcb4ba6b3fc40aaada): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.500541 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(cb8d3533af752b847581d90dec133cb757b6f6dccbc662dcb4ba6b3fc40aaada): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.500573 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(cb8d3533af752b847581d90dec133cb757b6f6dccbc662dcb4ba6b3fc40aaada): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:28:58 crc kubenswrapper[4805]: E0226 17:28:58.500628 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-2xtcm_openshift-operators(c48a9594-81fc-493b-98b0-fc3ad286abe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-2xtcm_openshift-operators(c48a9594-81fc-493b-98b0-fc3ad286abe2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(cb8d3533af752b847581d90dec133cb757b6f6dccbc662dcb4ba6b3fc40aaada): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" podUID="c48a9594-81fc-493b-98b0-fc3ad286abe2" Feb 26 17:29:02 crc kubenswrapper[4805]: I0226 17:29:02.978902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"7fefa94ae548a4e7f079335914e0279a24691f0bcdc248e7805d348d05312b1d"} Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.843360 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p"] Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.843840 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.844508 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.861729 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4"] Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.861841 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.862151 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.885782 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2d494ce292f59a63e28d5b5fd92ad1192183601a476e7605f4caf0364ce7f90e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.886159 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2d494ce292f59a63e28d5b5fd92ad1192183601a476e7605f4caf0364ce7f90e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.886186 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2d494ce292f59a63e28d5b5fd92ad1192183601a476e7605f4caf0364ce7f90e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.886258 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators(174cbe90-075b-4c73-ae20-cc8a47c42d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators(174cbe90-075b-4c73-ae20-cc8a47c42d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_openshift-operators_174cbe90-075b-4c73-ae20-cc8a47c42d06_0(2d494ce292f59a63e28d5b5fd92ad1192183601a476e7605f4caf0364ce7f90e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" podUID="174cbe90-075b-4c73-ae20-cc8a47c42d06" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.889935 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t7gbj"] Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.890104 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.890569 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.893757 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(28e0e410f8280e54a9741c28676805becad1cd87cf223df964c012d1844713c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.893824 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(28e0e410f8280e54a9741c28676805becad1cd87cf223df964c012d1844713c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.893848 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(28e0e410f8280e54a9741c28676805becad1cd87cf223df964c012d1844713c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.893912 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators(b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators(b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_openshift-operators_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4_0(28e0e410f8280e54a9741c28676805becad1cd87cf223df964c012d1844713c5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" podUID="b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.896034 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2xtcm"] Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.896186 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.896680 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.914611 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-htjln"] Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.914734 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.915104 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.956235 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(0f24126f74b6ab7b471bfe4ac41739424a095f1148bad6e9ff003f200c73f2da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.956295 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(0f24126f74b6ab7b471bfe4ac41739424a095f1148bad6e9ff003f200c73f2da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.956323 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(0f24126f74b6ab7b471bfe4ac41739424a095f1148bad6e9ff003f200c73f2da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.956370 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-2xtcm_openshift-operators(c48a9594-81fc-493b-98b0-fc3ad286abe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-2xtcm_openshift-operators(c48a9594-81fc-493b-98b0-fc3ad286abe2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2xtcm_openshift-operators_c48a9594-81fc-493b-98b0-fc3ad286abe2_0(0f24126f74b6ab7b471bfe4ac41739424a095f1148bad6e9ff003f200c73f2da): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" podUID="c48a9594-81fc-493b-98b0-fc3ad286abe2" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.962308 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(dedaee903d17a02fdbb47c45d9b95be2548bfa5d10e33afd93808f6050f8a556): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.962355 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(dedaee903d17a02fdbb47c45d9b95be2548bfa5d10e33afd93808f6050f8a556): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.962374 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(dedaee903d17a02fdbb47c45d9b95be2548bfa5d10e33afd93808f6050f8a556): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.962407 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-t7gbj_openshift-operators(df96d4df-f5b6-4b7b-956d-f957313d1914)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-t7gbj_openshift-operators(df96d4df-f5b6-4b7b-956d-f957313d1914)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-t7gbj_openshift-operators_df96d4df-f5b6-4b7b-956d-f957313d1914_0(dedaee903d17a02fdbb47c45d9b95be2548bfa5d10e33afd93808f6050f8a556): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" podUID="df96d4df-f5b6-4b7b-956d-f957313d1914" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.975972 4805 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(1d5ecef103cab0ac10f9cdc89dd1b102c0a8a1111a5743e0f705b63a1d2323b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.976084 4805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(1d5ecef103cab0ac10f9cdc89dd1b102c0a8a1111a5743e0f705b63a1d2323b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.976110 4805 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(1d5ecef103cab0ac10f9cdc89dd1b102c0a8a1111a5743e0f705b63a1d2323b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:29:04 crc kubenswrapper[4805]: E0226 17:29:04.976169 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-htjln_openshift-operators(dfc0288a-d269-4568-a6f0-57bd9fa6cfcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-htjln_openshift-operators(dfc0288a-d269-4568-a6f0-57bd9fa6cfcc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-htjln_openshift-operators_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc_0(1d5ecef103cab0ac10f9cdc89dd1b102c0a8a1111a5743e0f705b63a1d2323b2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" podUID="dfc0288a-d269-4568-a6f0-57bd9fa6cfcc" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.996710 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" event={"ID":"88e6bbd3-22a7-42ca-9b31-85e3d641b946","Type":"ContainerStarted","Data":"f86812ea58c3b87b5b827570be58089cc3181b5f1eb83a74a593c6e637a856f6"} Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.997041 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:29:04 crc kubenswrapper[4805]: I0226 17:29:04.997061 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:29:05 crc kubenswrapper[4805]: I0226 17:29:05.024891 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:29:05 crc kubenswrapper[4805]: I0226 17:29:05.030230 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" podStartSLOduration=10.030210963 podStartE2EDuration="10.030210963s" podCreationTimestamp="2026-02-26 17:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:29:05.025721467 +0000 UTC m=+859.587475826" watchObservedRunningTime="2026-02-26 17:29:05.030210963 +0000 UTC m=+859.591965302" Feb 26 17:29:06 crc kubenswrapper[4805]: I0226 17:29:06.001829 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:29:06 crc kubenswrapper[4805]: I0226 17:29:06.045369 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:29:06 crc 
kubenswrapper[4805]: I0226 17:29:06.685688 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:29:06 crc kubenswrapper[4805]: I0226 17:29:06.729527 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:29:06 crc kubenswrapper[4805]: I0226 17:29:06.914045 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmxg8"] Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.011829 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dmxg8" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="registry-server" containerID="cri-o://867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492" gracePeriod=2 Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.377285 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.501315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-catalog-content\") pod \"de2ab0e6-f960-4663-8b08-075f36a70c33\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.501516 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-utilities\") pod \"de2ab0e6-f960-4663-8b08-075f36a70c33\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.501582 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xccxh\" (UniqueName: \"kubernetes.io/projected/de2ab0e6-f960-4663-8b08-075f36a70c33-kube-api-access-xccxh\") pod \"de2ab0e6-f960-4663-8b08-075f36a70c33\" (UID: \"de2ab0e6-f960-4663-8b08-075f36a70c33\") " Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.502331 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-utilities" (OuterVolumeSpecName: "utilities") pod "de2ab0e6-f960-4663-8b08-075f36a70c33" (UID: "de2ab0e6-f960-4663-8b08-075f36a70c33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.508263 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2ab0e6-f960-4663-8b08-075f36a70c33-kube-api-access-xccxh" (OuterVolumeSpecName: "kube-api-access-xccxh") pod "de2ab0e6-f960-4663-8b08-075f36a70c33" (UID: "de2ab0e6-f960-4663-8b08-075f36a70c33"). InnerVolumeSpecName "kube-api-access-xccxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.602834 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.602889 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xccxh\" (UniqueName: \"kubernetes.io/projected/de2ab0e6-f960-4663-8b08-075f36a70c33-kube-api-access-xccxh\") on node \"crc\" DevicePath \"\"" Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.620293 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de2ab0e6-f960-4663-8b08-075f36a70c33" (UID: "de2ab0e6-f960-4663-8b08-075f36a70c33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:29:08 crc kubenswrapper[4805]: I0226 17:29:08.704107 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ab0e6-f960-4663-8b08-075f36a70c33-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.018812 4805 generic.go:334] "Generic (PLEG): container finished" podID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerID="867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492" exitCode=0 Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.018889 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dmxg8" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.018886 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerDied","Data":"867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492"} Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.019407 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dmxg8" event={"ID":"de2ab0e6-f960-4663-8b08-075f36a70c33","Type":"ContainerDied","Data":"f872b8ac879a14b3c6769d88d2a12783bc07c0ca7bc5f6706d4a023aea37e783"} Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.019438 4805 scope.go:117] "RemoveContainer" containerID="867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.040443 4805 scope.go:117] "RemoveContainer" containerID="89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.040565 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dmxg8"] Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.053077 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dmxg8"] Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.086223 4805 scope.go:117] "RemoveContainer" containerID="99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.104254 4805 scope.go:117] "RemoveContainer" containerID="867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492" Feb 26 17:29:09 crc kubenswrapper[4805]: E0226 17:29:09.104664 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492\": container with ID starting with 867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492 not found: ID does not exist" containerID="867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.104704 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492"} err="failed to get container status \"867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492\": rpc error: code = NotFound desc = could not find container \"867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492\": container with ID starting with 867e013d65f855ef552558779fe4b318a159151fd6ed9c45536f1360fa47e492 not found: ID does not exist" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.104730 4805 scope.go:117] "RemoveContainer" containerID="89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1" Feb 26 17:29:09 crc kubenswrapper[4805]: E0226 17:29:09.104981 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1\": container with ID starting with 89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1 not found: ID does not exist" containerID="89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.105008 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1"} err="failed to get container status \"89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1\": rpc error: code = NotFound desc = could not find container \"89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1\": container with ID 
starting with 89b87384bffa08e29578f143209054014d9537a39b54182db3ad6d81f1c825e1 not found: ID does not exist" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.105051 4805 scope.go:117] "RemoveContainer" containerID="99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0" Feb 26 17:29:09 crc kubenswrapper[4805]: E0226 17:29:09.105351 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0\": container with ID starting with 99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0 not found: ID does not exist" containerID="99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0" Feb 26 17:29:09 crc kubenswrapper[4805]: I0226 17:29:09.105378 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0"} err="failed to get container status \"99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0\": rpc error: code = NotFound desc = could not find container \"99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0\": container with ID starting with 99aea28984f2f683487ce78a88b3752cda6ae45186ff50ac3abd746efe5c82c0 not found: ID does not exist" Feb 26 17:29:10 crc kubenswrapper[4805]: I0226 17:29:10.960136 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" path="/var/lib/kubelet/pods/de2ab0e6-f960-4663-8b08-075f36a70c33/volumes" Feb 26 17:29:15 crc kubenswrapper[4805]: I0226 17:29:15.953073 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:29:15 crc kubenswrapper[4805]: I0226 17:29:15.953110 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:29:15 crc kubenswrapper[4805]: I0226 17:29:15.953153 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:15 crc kubenswrapper[4805]: I0226 17:29:15.953572 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" Feb 26 17:29:15 crc kubenswrapper[4805]: I0226 17:29:15.953573 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" Feb 26 17:29:15 crc kubenswrapper[4805]: I0226 17:29:15.954579 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:16 crc kubenswrapper[4805]: I0226 17:29:16.285567 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4"] Feb 26 17:29:16 crc kubenswrapper[4805]: W0226 17:29:16.316696 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3e82ddd_d1e2_4d50_a582_04f07e2a1fb4.slice/crio-4658a91dd9f579dd896dc0c3e3c259bac174dbe552f9e98e229076e20927df02 WatchSource:0}: Error finding container 4658a91dd9f579dd896dc0c3e3c259bac174dbe552f9e98e229076e20927df02: Status 404 returned error can't find the container with id 4658a91dd9f579dd896dc0c3e3c259bac174dbe552f9e98e229076e20927df02 Feb 26 17:29:16 crc kubenswrapper[4805]: I0226 17:29:16.319854 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:29:16 crc kubenswrapper[4805]: I0226 17:29:16.444248 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-2xtcm"] Feb 26 17:29:16 crc kubenswrapper[4805]: I0226 17:29:16.464493 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-htjln"] Feb 26 17:29:17 crc kubenswrapper[4805]: I0226 17:29:17.066838 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" event={"ID":"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4","Type":"ContainerStarted","Data":"4658a91dd9f579dd896dc0c3e3c259bac174dbe552f9e98e229076e20927df02"} Feb 26 17:29:17 crc kubenswrapper[4805]: I0226 17:29:17.069259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" event={"ID":"dfc0288a-d269-4568-a6f0-57bd9fa6cfcc","Type":"ContainerStarted","Data":"faefd7a8cb6df12a64fb9d1ff477d1703638f5ef3b03444190e1f187978cb2a9"} Feb 26 17:29:17 crc kubenswrapper[4805]: I0226 17:29:17.070265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" event={"ID":"c48a9594-81fc-493b-98b0-fc3ad286abe2","Type":"ContainerStarted","Data":"8dd64aa91b89445529c5f4a883649f66c910b313980764916cb8c02716e9224e"} Feb 26 17:29:17 crc kubenswrapper[4805]: I0226 17:29:17.952461 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:29:17 crc kubenswrapper[4805]: I0226 17:29:17.952918 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" Feb 26 17:29:18 crc kubenswrapper[4805]: I0226 17:29:18.397786 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p"] Feb 26 17:29:18 crc kubenswrapper[4805]: I0226 17:29:18.956277 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:18 crc kubenswrapper[4805]: I0226 17:29:18.971901 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:19 crc kubenswrapper[4805]: I0226 17:29:19.083790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" event={"ID":"174cbe90-075b-4c73-ae20-cc8a47c42d06","Type":"ContainerStarted","Data":"d8f8dfd6dff6505272a3aef557d8aee84ea0e289227a03fc2020a1e40207af5e"} Feb 26 17:29:19 crc kubenswrapper[4805]: I0226 17:29:19.309352 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-t7gbj"] Feb 26 17:29:21 crc kubenswrapper[4805]: W0226 17:29:21.415575 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf96d4df_f5b6_4b7b_956d_f957313d1914.slice/crio-a9ccebcab44c6b46c2b11d8f1f9f610f744327f4eba2fc16db1b81559799cf35 WatchSource:0}: Error finding container a9ccebcab44c6b46c2b11d8f1f9f610f744327f4eba2fc16db1b81559799cf35: Status 404 returned error can't find the container with id a9ccebcab44c6b46c2b11d8f1f9f610f744327f4eba2fc16db1b81559799cf35 Feb 26 17:29:22 crc kubenswrapper[4805]: I0226 17:29:22.102756 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" 
event={"ID":"df96d4df-f5b6-4b7b-956d-f957313d1914","Type":"ContainerStarted","Data":"a9ccebcab44c6b46c2b11d8f1f9f610f744327f4eba2fc16db1b81559799cf35"} Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.108925 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" event={"ID":"b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4","Type":"ContainerStarted","Data":"b38a7dc4bf54a3e7ed739ec84ebaedc874947df13b4167fbc97ca4ab3e15293e"} Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.110927 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" event={"ID":"dfc0288a-d269-4568-a6f0-57bd9fa6cfcc","Type":"ContainerStarted","Data":"cba5636ff7bcea128f53fca55da298c42babc72cb5806f744034bd8dd02d6d7c"} Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.113639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" event={"ID":"c48a9594-81fc-493b-98b0-fc3ad286abe2","Type":"ContainerStarted","Data":"4d6086ac2c392239219a3e68ca493f1b30792921cf863c5e78a732ecf061da66"} Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.113984 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.115377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" event={"ID":"174cbe90-075b-4c73-ae20-cc8a47c42d06","Type":"ContainerStarted","Data":"c5cc27f350c436de718ac1a1e1b35ceff8b0f48eceea507eb0e1087a0fa35b5e"} Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.124625 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4" podStartSLOduration=19.976823177 
podStartE2EDuration="26.12460982s" podCreationTimestamp="2026-02-26 17:28:57 +0000 UTC" firstStartedPulling="2026-02-26 17:29:16.3195465 +0000 UTC m=+870.881300839" lastFinishedPulling="2026-02-26 17:29:22.467333133 +0000 UTC m=+877.029087482" observedRunningTime="2026-02-26 17:29:23.123276116 +0000 UTC m=+877.685030455" watchObservedRunningTime="2026-02-26 17:29:23.12460982 +0000 UTC m=+877.686364159" Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.146071 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p" podStartSLOduration=22.111528133 podStartE2EDuration="26.146055043s" podCreationTimestamp="2026-02-26 17:28:57 +0000 UTC" firstStartedPulling="2026-02-26 17:29:18.431823258 +0000 UTC m=+872.993577597" lastFinishedPulling="2026-02-26 17:29:22.466350158 +0000 UTC m=+877.028104507" observedRunningTime="2026-02-26 17:29:23.14362492 +0000 UTC m=+877.705379269" watchObservedRunningTime="2026-02-26 17:29:23.146055043 +0000 UTC m=+877.707809372" Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.181481 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" podStartSLOduration=19.197402681 podStartE2EDuration="25.181460755s" podCreationTimestamp="2026-02-26 17:28:58 +0000 UTC" firstStartedPulling="2026-02-26 17:29:16.483218938 +0000 UTC m=+871.044973287" lastFinishedPulling="2026-02-26 17:29:22.467277022 +0000 UTC m=+877.029031361" observedRunningTime="2026-02-26 17:29:23.178482018 +0000 UTC m=+877.740236357" watchObservedRunningTime="2026-02-26 17:29:23.181460755 +0000 UTC m=+877.743215084" Feb 26 17:29:23 crc kubenswrapper[4805]: I0226 17:29:23.203821 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-htjln" podStartSLOduration=20.212013588 podStartE2EDuration="26.203806141s" 
podCreationTimestamp="2026-02-26 17:28:57 +0000 UTC" firstStartedPulling="2026-02-26 17:29:16.491100971 +0000 UTC m=+871.052855310" lastFinishedPulling="2026-02-26 17:29:22.482893524 +0000 UTC m=+877.044647863" observedRunningTime="2026-02-26 17:29:23.200705871 +0000 UTC m=+877.762460210" watchObservedRunningTime="2026-02-26 17:29:23.203806141 +0000 UTC m=+877.765560480" Feb 26 17:29:26 crc kubenswrapper[4805]: I0226 17:29:26.167866 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2htqs" Feb 26 17:29:28 crc kubenswrapper[4805]: I0226 17:29:28.147759 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" event={"ID":"df96d4df-f5b6-4b7b-956d-f957313d1914","Type":"ContainerStarted","Data":"c5d3ffc3d60f255398b73ac167e02406e300c083a1b9fa617fcc58bf5f8f45fa"} Feb 26 17:29:28 crc kubenswrapper[4805]: I0226 17:29:28.148337 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:28 crc kubenswrapper[4805]: I0226 17:29:28.150108 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" Feb 26 17:29:28 crc kubenswrapper[4805]: I0226 17:29:28.169265 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-t7gbj" podStartSLOduration=25.415397766 podStartE2EDuration="31.169245128s" podCreationTimestamp="2026-02-26 17:28:57 +0000 UTC" firstStartedPulling="2026-02-26 17:29:21.421978568 +0000 UTC m=+875.983732907" lastFinishedPulling="2026-02-26 17:29:27.17582593 +0000 UTC m=+881.737580269" observedRunningTime="2026-02-26 17:29:28.163574642 +0000 UTC m=+882.725328981" watchObservedRunningTime="2026-02-26 17:29:28.169245128 +0000 UTC m=+882.730999467" Feb 26 17:29:28 crc kubenswrapper[4805]: I0226 
17:29:28.470102 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-2xtcm" Feb 26 17:29:32 crc kubenswrapper[4805]: I0226 17:29:32.977957 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:29:32 crc kubenswrapper[4805]: I0226 17:29:32.978049 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.410406 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5"] Feb 26 17:29:37 crc kubenswrapper[4805]: E0226 17:29:37.411206 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="registry-server" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.411223 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="registry-server" Feb 26 17:29:37 crc kubenswrapper[4805]: E0226 17:29:37.411238 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="extract-content" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.411246 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="extract-content" Feb 26 17:29:37 crc kubenswrapper[4805]: E0226 17:29:37.411272 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="extract-utilities" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.411280 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="extract-utilities" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.411403 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2ab0e6-f960-4663-8b08-075f36a70c33" containerName="registry-server" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.411868 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.413698 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rqcjw" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.414001 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.419349 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.469656 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-8ncfd"] Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.485540 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8ncfd" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.490392 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-q22gm" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.493582 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5"] Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.503921 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8ncfd"] Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.510376 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mzsw5"] Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.511275 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.513708 4805 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-v2lsf" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.518405 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mzsw5"] Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.612454 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9r8m\" (UniqueName: \"kubernetes.io/projected/42dac4fd-2d52-471d-88df-5c9c12963936-kube-api-access-s9r8m\") pod \"cert-manager-cainjector-cf98fcc89-q4dc5\" (UID: \"42dac4fd-2d52-471d-88df-5c9c12963936\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.612796 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md8xd\" (UniqueName: 
\"kubernetes.io/projected/6474d164-4c68-4fba-9eaf-ec92e1636ea9-kube-api-access-md8xd\") pod \"cert-manager-858654f9db-8ncfd\" (UID: \"6474d164-4c68-4fba-9eaf-ec92e1636ea9\") " pod="cert-manager/cert-manager-858654f9db-8ncfd" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.714234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md8xd\" (UniqueName: \"kubernetes.io/projected/6474d164-4c68-4fba-9eaf-ec92e1636ea9-kube-api-access-md8xd\") pod \"cert-manager-858654f9db-8ncfd\" (UID: \"6474d164-4c68-4fba-9eaf-ec92e1636ea9\") " pod="cert-manager/cert-manager-858654f9db-8ncfd" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.714309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk556\" (UniqueName: \"kubernetes.io/projected/1916615f-2f09-479c-896e-6be0815477cf-kube-api-access-xk556\") pod \"cert-manager-webhook-687f57d79b-mzsw5\" (UID: \"1916615f-2f09-479c-896e-6be0815477cf\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.714394 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9r8m\" (UniqueName: \"kubernetes.io/projected/42dac4fd-2d52-471d-88df-5c9c12963936-kube-api-access-s9r8m\") pod \"cert-manager-cainjector-cf98fcc89-q4dc5\" (UID: \"42dac4fd-2d52-471d-88df-5c9c12963936\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.737774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md8xd\" (UniqueName: \"kubernetes.io/projected/6474d164-4c68-4fba-9eaf-ec92e1636ea9-kube-api-access-md8xd\") pod \"cert-manager-858654f9db-8ncfd\" (UID: \"6474d164-4c68-4fba-9eaf-ec92e1636ea9\") " pod="cert-manager/cert-manager-858654f9db-8ncfd" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.741223 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s9r8m\" (UniqueName: \"kubernetes.io/projected/42dac4fd-2d52-471d-88df-5c9c12963936-kube-api-access-s9r8m\") pod \"cert-manager-cainjector-cf98fcc89-q4dc5\" (UID: \"42dac4fd-2d52-471d-88df-5c9c12963936\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.804473 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8ncfd" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.815612 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk556\" (UniqueName: \"kubernetes.io/projected/1916615f-2f09-479c-896e-6be0815477cf-kube-api-access-xk556\") pod \"cert-manager-webhook-687f57d79b-mzsw5\" (UID: \"1916615f-2f09-479c-896e-6be0815477cf\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:29:37 crc kubenswrapper[4805]: I0226 17:29:37.833402 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk556\" (UniqueName: \"kubernetes.io/projected/1916615f-2f09-479c-896e-6be0815477cf-kube-api-access-xk556\") pod \"cert-manager-webhook-687f57d79b-mzsw5\" (UID: \"1916615f-2f09-479c-896e-6be0815477cf\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:29:38 crc kubenswrapper[4805]: I0226 17:29:38.035134 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" Feb 26 17:29:38 crc kubenswrapper[4805]: I0226 17:29:38.044214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8ncfd"] Feb 26 17:29:38 crc kubenswrapper[4805]: W0226 17:29:38.057523 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6474d164_4c68_4fba_9eaf_ec92e1636ea9.slice/crio-df53b550c8ebdc2d8fb4650c367493b04e4c74a503af5c920e24ae470ca837c9 WatchSource:0}: Error finding container df53b550c8ebdc2d8fb4650c367493b04e4c74a503af5c920e24ae470ca837c9: Status 404 returned error can't find the container with id df53b550c8ebdc2d8fb4650c367493b04e4c74a503af5c920e24ae470ca837c9 Feb 26 17:29:38 crc kubenswrapper[4805]: I0226 17:29:38.125822 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:29:38 crc kubenswrapper[4805]: I0226 17:29:38.197922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8ncfd" event={"ID":"6474d164-4c68-4fba-9eaf-ec92e1636ea9","Type":"ContainerStarted","Data":"df53b550c8ebdc2d8fb4650c367493b04e4c74a503af5c920e24ae470ca837c9"} Feb 26 17:29:38 crc kubenswrapper[4805]: I0226 17:29:38.242170 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5"] Feb 26 17:29:38 crc kubenswrapper[4805]: W0226 17:29:38.246501 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42dac4fd_2d52_471d_88df_5c9c12963936.slice/crio-93b3cf7807aa90b781c6815e75a46545a6d7e97ea5b3d855dc05e2c0deb5d05b WatchSource:0}: Error finding container 93b3cf7807aa90b781c6815e75a46545a6d7e97ea5b3d855dc05e2c0deb5d05b: Status 404 returned error can't find the container with id 
93b3cf7807aa90b781c6815e75a46545a6d7e97ea5b3d855dc05e2c0deb5d05b Feb 26 17:29:38 crc kubenswrapper[4805]: I0226 17:29:38.327913 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mzsw5"] Feb 26 17:29:39 crc kubenswrapper[4805]: I0226 17:29:39.231446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" event={"ID":"42dac4fd-2d52-471d-88df-5c9c12963936","Type":"ContainerStarted","Data":"93b3cf7807aa90b781c6815e75a46545a6d7e97ea5b3d855dc05e2c0deb5d05b"} Feb 26 17:29:39 crc kubenswrapper[4805]: I0226 17:29:39.232982 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" event={"ID":"1916615f-2f09-479c-896e-6be0815477cf","Type":"ContainerStarted","Data":"ff56ab8732b649a6204206687f581685b05261e808b1707badfbfd9cca519ec9"} Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.271161 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" event={"ID":"1916615f-2f09-479c-896e-6be0815477cf","Type":"ContainerStarted","Data":"8cc4a3ba51bf0449b044198895d3c00746ff5193eb107aea64a5c90785973d03"} Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.271525 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.274320 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8ncfd" event={"ID":"6474d164-4c68-4fba-9eaf-ec92e1636ea9","Type":"ContainerStarted","Data":"4d6618d7e8adcaa95bc6dec1c3409eb05cc92827371074ec5e3659e6819c7b0b"} Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.275841 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" 
event={"ID":"42dac4fd-2d52-471d-88df-5c9c12963936","Type":"ContainerStarted","Data":"03920ca07b819495b1b16fd6ca6029331a78e6c6f41e407a9d5dd5f0bb674f49"} Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.288742 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" podStartSLOduration=2.328040751 podStartE2EDuration="6.288723608s" podCreationTimestamp="2026-02-26 17:29:37 +0000 UTC" firstStartedPulling="2026-02-26 17:29:38.332307774 +0000 UTC m=+892.894062113" lastFinishedPulling="2026-02-26 17:29:42.292990631 +0000 UTC m=+896.854744970" observedRunningTime="2026-02-26 17:29:43.284587552 +0000 UTC m=+897.846341891" watchObservedRunningTime="2026-02-26 17:29:43.288723608 +0000 UTC m=+897.850477947" Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.305044 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-8ncfd" podStartSLOduration=2.067059067 podStartE2EDuration="6.305011788s" podCreationTimestamp="2026-02-26 17:29:37 +0000 UTC" firstStartedPulling="2026-02-26 17:29:38.061705922 +0000 UTC m=+892.623460261" lastFinishedPulling="2026-02-26 17:29:42.299658643 +0000 UTC m=+896.861412982" observedRunningTime="2026-02-26 17:29:43.301346983 +0000 UTC m=+897.863101322" watchObservedRunningTime="2026-02-26 17:29:43.305011788 +0000 UTC m=+897.866766127" Feb 26 17:29:43 crc kubenswrapper[4805]: I0226 17:29:43.321247 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-q4dc5" podStartSLOduration=2.224236907 podStartE2EDuration="6.321224866s" podCreationTimestamp="2026-02-26 17:29:37 +0000 UTC" firstStartedPulling="2026-02-26 17:29:38.248214298 +0000 UTC m=+892.809968637" lastFinishedPulling="2026-02-26 17:29:42.345202247 +0000 UTC m=+896.906956596" observedRunningTime="2026-02-26 17:29:43.315857937 +0000 UTC m=+897.877612286" 
watchObservedRunningTime="2026-02-26 17:29:43.321224866 +0000 UTC m=+897.882979215" Feb 26 17:29:48 crc kubenswrapper[4805]: I0226 17:29:48.130651 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mzsw5" Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.141720 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535450-rlwkb"] Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.143692 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535450-rlwkb" Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.145916 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.146182 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.147084 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.153243 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535450-rlwkb"] Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.209322 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhv8n\" (UniqueName: \"kubernetes.io/projected/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2-kube-api-access-qhv8n\") pod \"auto-csr-approver-29535450-rlwkb\" (UID: \"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2\") " pod="openshift-infra/auto-csr-approver-29535450-rlwkb" Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.238456 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"] Feb 26 17:30:00 crc 
kubenswrapper[4805]: I0226 17:30:00.239165 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.240767 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.240953 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.248500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"]
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.309725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b832ebc6-5bcc-437a-9ff7-8e9987e423af-secret-volume\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.309765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b832ebc6-5bcc-437a-9ff7-8e9987e423af-config-volume\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.309791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcgq\" (UniqueName: \"kubernetes.io/projected/b832ebc6-5bcc-437a-9ff7-8e9987e423af-kube-api-access-2mcgq\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.309865 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhv8n\" (UniqueName: \"kubernetes.io/projected/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2-kube-api-access-qhv8n\") pod \"auto-csr-approver-29535450-rlwkb\" (UID: \"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2\") " pod="openshift-infra/auto-csr-approver-29535450-rlwkb"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.332520 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhv8n\" (UniqueName: \"kubernetes.io/projected/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2-kube-api-access-qhv8n\") pod \"auto-csr-approver-29535450-rlwkb\" (UID: \"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2\") " pod="openshift-infra/auto-csr-approver-29535450-rlwkb"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.411365 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b832ebc6-5bcc-437a-9ff7-8e9987e423af-config-volume\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.411413 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b832ebc6-5bcc-437a-9ff7-8e9987e423af-secret-volume\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.411453 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mcgq\" (UniqueName: \"kubernetes.io/projected/b832ebc6-5bcc-437a-9ff7-8e9987e423af-kube-api-access-2mcgq\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.412567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b832ebc6-5bcc-437a-9ff7-8e9987e423af-config-volume\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.415152 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b832ebc6-5bcc-437a-9ff7-8e9987e423af-secret-volume\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.433270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mcgq\" (UniqueName: \"kubernetes.io/projected/b832ebc6-5bcc-437a-9ff7-8e9987e423af-kube-api-access-2mcgq\") pod \"collect-profiles-29535450-pkxrs\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.478487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535450-rlwkb"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.553495 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.664880 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535450-rlwkb"]
Feb 26 17:30:00 crc kubenswrapper[4805]: W0226 17:30:00.670852 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f0c7db9_369e_4a42_bf2e_2bacfed49fe2.slice/crio-f3f713dd7a088a56f495048ad87be0a4d0086ebfb79c2c0689ad0ab8b2231201 WatchSource:0}: Error finding container f3f713dd7a088a56f495048ad87be0a4d0086ebfb79c2c0689ad0ab8b2231201: Status 404 returned error can't find the container with id f3f713dd7a088a56f495048ad87be0a4d0086ebfb79c2c0689ad0ab8b2231201
Feb 26 17:30:00 crc kubenswrapper[4805]: I0226 17:30:00.741649 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"]
Feb 26 17:30:00 crc kubenswrapper[4805]: W0226 17:30:00.746161 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb832ebc6_5bcc_437a_9ff7_8e9987e423af.slice/crio-5a2a433bfc9c40955a23e30369beef0c379f00f681cbb41c99daa39046b36979 WatchSource:0}: Error finding container 5a2a433bfc9c40955a23e30369beef0c379f00f681cbb41c99daa39046b36979: Status 404 returned error can't find the container with id 5a2a433bfc9c40955a23e30369beef0c379f00f681cbb41c99daa39046b36979
Feb 26 17:30:01 crc kubenswrapper[4805]: I0226 17:30:01.390365 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535450-rlwkb" event={"ID":"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2","Type":"ContainerStarted","Data":"f3f713dd7a088a56f495048ad87be0a4d0086ebfb79c2c0689ad0ab8b2231201"}
Feb 26 17:30:01 crc kubenswrapper[4805]: I0226 17:30:01.393411 4805 generic.go:334] "Generic (PLEG): container finished" podID="b832ebc6-5bcc-437a-9ff7-8e9987e423af" containerID="1a8ce2778aa1f4965424fa96a8b7ca85ae26ff8f3b4144fb6d807265af867ded" exitCode=0
Feb 26 17:30:01 crc kubenswrapper[4805]: I0226 17:30:01.393483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs" event={"ID":"b832ebc6-5bcc-437a-9ff7-8e9987e423af","Type":"ContainerDied","Data":"1a8ce2778aa1f4965424fa96a8b7ca85ae26ff8f3b4144fb6d807265af867ded"}
Feb 26 17:30:01 crc kubenswrapper[4805]: I0226 17:30:01.393620 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs" event={"ID":"b832ebc6-5bcc-437a-9ff7-8e9987e423af","Type":"ContainerStarted","Data":"5a2a433bfc9c40955a23e30369beef0c379f00f681cbb41c99daa39046b36979"}
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.627502 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.662837 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b832ebc6-5bcc-437a-9ff7-8e9987e423af-config-volume\") pod \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") "
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.663031 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b832ebc6-5bcc-437a-9ff7-8e9987e423af-secret-volume\") pod \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") "
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.663067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mcgq\" (UniqueName: \"kubernetes.io/projected/b832ebc6-5bcc-437a-9ff7-8e9987e423af-kube-api-access-2mcgq\") pod \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\" (UID: \"b832ebc6-5bcc-437a-9ff7-8e9987e423af\") "
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.663654 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b832ebc6-5bcc-437a-9ff7-8e9987e423af-config-volume" (OuterVolumeSpecName: "config-volume") pod "b832ebc6-5bcc-437a-9ff7-8e9987e423af" (UID: "b832ebc6-5bcc-437a-9ff7-8e9987e423af"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.668291 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b832ebc6-5bcc-437a-9ff7-8e9987e423af-kube-api-access-2mcgq" (OuterVolumeSpecName: "kube-api-access-2mcgq") pod "b832ebc6-5bcc-437a-9ff7-8e9987e423af" (UID: "b832ebc6-5bcc-437a-9ff7-8e9987e423af"). InnerVolumeSpecName "kube-api-access-2mcgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.676770 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b832ebc6-5bcc-437a-9ff7-8e9987e423af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b832ebc6-5bcc-437a-9ff7-8e9987e423af" (UID: "b832ebc6-5bcc-437a-9ff7-8e9987e423af"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.764659 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b832ebc6-5bcc-437a-9ff7-8e9987e423af-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.764690 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mcgq\" (UniqueName: \"kubernetes.io/projected/b832ebc6-5bcc-437a-9ff7-8e9987e423af-kube-api-access-2mcgq\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.764701 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b832ebc6-5bcc-437a-9ff7-8e9987e423af-config-volume\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.977582 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 17:30:02 crc kubenswrapper[4805]: I0226 17:30:02.977664 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 17:30:03 crc kubenswrapper[4805]: I0226 17:30:03.406744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs" event={"ID":"b832ebc6-5bcc-437a-9ff7-8e9987e423af","Type":"ContainerDied","Data":"5a2a433bfc9c40955a23e30369beef0c379f00f681cbb41c99daa39046b36979"}
Feb 26 17:30:03 crc kubenswrapper[4805]: I0226 17:30:03.406783 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a2a433bfc9c40955a23e30369beef0c379f00f681cbb41c99daa39046b36979"
Feb 26 17:30:03 crc kubenswrapper[4805]: I0226 17:30:03.406768 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"
Feb 26 17:30:03 crc kubenswrapper[4805]: I0226 17:30:03.408284 4805 generic.go:334] "Generic (PLEG): container finished" podID="0f0c7db9-369e-4a42-bf2e-2bacfed49fe2" containerID="4352610a30c9dedb36f560774603a9265c1073fd82623f3f98c5f75a51581c5a" exitCode=0
Feb 26 17:30:03 crc kubenswrapper[4805]: I0226 17:30:03.408322 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535450-rlwkb" event={"ID":"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2","Type":"ContainerDied","Data":"4352610a30c9dedb36f560774603a9265c1073fd82623f3f98c5f75a51581c5a"}
Feb 26 17:30:04 crc kubenswrapper[4805]: I0226 17:30:04.655678 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535450-rlwkb"
Feb 26 17:30:04 crc kubenswrapper[4805]: I0226 17:30:04.687168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhv8n\" (UniqueName: \"kubernetes.io/projected/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2-kube-api-access-qhv8n\") pod \"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2\" (UID: \"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2\") "
Feb 26 17:30:04 crc kubenswrapper[4805]: I0226 17:30:04.693152 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2-kube-api-access-qhv8n" (OuterVolumeSpecName: "kube-api-access-qhv8n") pod "0f0c7db9-369e-4a42-bf2e-2bacfed49fe2" (UID: "0f0c7db9-369e-4a42-bf2e-2bacfed49fe2"). InnerVolumeSpecName "kube-api-access-qhv8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:30:04 crc kubenswrapper[4805]: I0226 17:30:04.788918 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhv8n\" (UniqueName: \"kubernetes.io/projected/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2-kube-api-access-qhv8n\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:05 crc kubenswrapper[4805]: I0226 17:30:05.420845 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535450-rlwkb" event={"ID":"0f0c7db9-369e-4a42-bf2e-2bacfed49fe2","Type":"ContainerDied","Data":"f3f713dd7a088a56f495048ad87be0a4d0086ebfb79c2c0689ad0ab8b2231201"}
Feb 26 17:30:05 crc kubenswrapper[4805]: I0226 17:30:05.420890 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3f713dd7a088a56f495048ad87be0a4d0086ebfb79c2c0689ad0ab8b2231201"
Feb 26 17:30:05 crc kubenswrapper[4805]: I0226 17:30:05.420968 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535450-rlwkb"
Feb 26 17:30:05 crc kubenswrapper[4805]: I0226 17:30:05.705616 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535444-vbq7n"]
Feb 26 17:30:05 crc kubenswrapper[4805]: I0226 17:30:05.712634 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535444-vbq7n"]
Feb 26 17:30:06 crc kubenswrapper[4805]: I0226 17:30:06.963590 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ac4c66-1f9e-4091-b057-ebac30de30ec" path="/var/lib/kubelet/pods/31ac4c66-1f9e-4091-b057-ebac30de30ec/volumes"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.523007 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"]
Feb 26 17:30:11 crc kubenswrapper[4805]: E0226 17:30:11.524234 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f0c7db9-369e-4a42-bf2e-2bacfed49fe2" containerName="oc"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.524252 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f0c7db9-369e-4a42-bf2e-2bacfed49fe2" containerName="oc"
Feb 26 17:30:11 crc kubenswrapper[4805]: E0226 17:30:11.524270 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b832ebc6-5bcc-437a-9ff7-8e9987e423af" containerName="collect-profiles"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.524276 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b832ebc6-5bcc-437a-9ff7-8e9987e423af" containerName="collect-profiles"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.524397 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f0c7db9-369e-4a42-bf2e-2bacfed49fe2" containerName="oc"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.524415 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b832ebc6-5bcc-437a-9ff7-8e9987e423af" containerName="collect-profiles"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.525437 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.527588 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.531525 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"]
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.584123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.584217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.584317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zdv\" (UniqueName: \"kubernetes.io/projected/3b63836c-d48c-4cba-a661-0c6063f2cbbc-kube-api-access-r7zdv\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.685524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.685675 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zdv\" (UniqueName: \"kubernetes.io/projected/3b63836c-d48c-4cba-a661-0c6063f2cbbc-kube-api-access-r7zdv\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.685740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.686143 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.686239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.718495 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zdv\" (UniqueName: \"kubernetes.io/projected/3b63836c-d48c-4cba-a661-0c6063f2cbbc-kube-api-access-r7zdv\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:11 crc kubenswrapper[4805]: I0226 17:30:11.842597 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:12 crc kubenswrapper[4805]: I0226 17:30:12.068767 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"]
Feb 26 17:30:12 crc kubenswrapper[4805]: I0226 17:30:12.466794 4805 generic.go:334] "Generic (PLEG): container finished" podID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerID="9c7503b02b926c274dd2903dea687e092d1f883acbb9e034d5f31dcd7534e6a0" exitCode=0
Feb 26 17:30:12 crc kubenswrapper[4805]: I0226 17:30:12.466838 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92" event={"ID":"3b63836c-d48c-4cba-a661-0c6063f2cbbc","Type":"ContainerDied","Data":"9c7503b02b926c274dd2903dea687e092d1f883acbb9e034d5f31dcd7534e6a0"}
Feb 26 17:30:12 crc kubenswrapper[4805]: I0226 17:30:12.466862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92" event={"ID":"3b63836c-d48c-4cba-a661-0c6063f2cbbc","Type":"ContainerStarted","Data":"9bbdf7b28cc64cae3d57f2f904f25a8789ac948fc3e503b04a918c1e3be89b91"}
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.113738 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.114789 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.116859 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.116859 4805 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-m7gzd"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.117193 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.121695 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.206197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") " pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.206275 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8r2\" (UniqueName: \"kubernetes.io/projected/250bedca-52a1-4dd3-a7ad-084e8372437d-kube-api-access-dt8r2\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") " pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.307845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") " pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.307930 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt8r2\" (UniqueName: \"kubernetes.io/projected/250bedca-52a1-4dd3-a7ad-084e8372437d-kube-api-access-dt8r2\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") " pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.332010 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt8r2\" (UniqueName: \"kubernetes.io/projected/250bedca-52a1-4dd3-a7ad-084e8372437d-kube-api-access-dt8r2\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") " pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.335371 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.335437 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a18686e151316913868ca14a1a59932d867d9824262a9b7d19a48d8fd781de4d/globalmount\"" pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.457292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170b5e2d-9e74-4770-9551-675d66e2edcb\") pod \"minio\" (UID: \"250bedca-52a1-4dd3-a7ad-084e8372437d\") " pod="minio-dev/minio"
Feb 26 17:30:13 crc kubenswrapper[4805]: I0226 17:30:13.732524 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 26 17:30:14 crc kubenswrapper[4805]: I0226 17:30:14.163234 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 26 17:30:14 crc kubenswrapper[4805]: W0226 17:30:14.169591 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod250bedca_52a1_4dd3_a7ad_084e8372437d.slice/crio-5e1b9ce7611b81ee2745d18ae56977ebde0ca9b0d311fbc0018c13a088b650f0 WatchSource:0}: Error finding container 5e1b9ce7611b81ee2745d18ae56977ebde0ca9b0d311fbc0018c13a088b650f0: Status 404 returned error can't find the container with id 5e1b9ce7611b81ee2745d18ae56977ebde0ca9b0d311fbc0018c13a088b650f0
Feb 26 17:30:14 crc kubenswrapper[4805]: I0226 17:30:14.487575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"250bedca-52a1-4dd3-a7ad-084e8372437d","Type":"ContainerStarted","Data":"5e1b9ce7611b81ee2745d18ae56977ebde0ca9b0d311fbc0018c13a088b650f0"}
Feb 26 17:30:14 crc kubenswrapper[4805]: I0226 17:30:14.489231 4805 generic.go:334] "Generic (PLEG): container finished" podID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerID="7c92a2b919b124d0870f00c0022312ee5537b10a6fca9241a6ac15fdea2915dc" exitCode=0
Feb 26 17:30:14 crc kubenswrapper[4805]: I0226 17:30:14.489269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92" event={"ID":"3b63836c-d48c-4cba-a661-0c6063f2cbbc","Type":"ContainerDied","Data":"7c92a2b919b124d0870f00c0022312ee5537b10a6fca9241a6ac15fdea2915dc"}
Feb 26 17:30:15 crc kubenswrapper[4805]: I0226 17:30:15.497256 4805 generic.go:334] "Generic (PLEG): container finished" podID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerID="b109633d9f3c9e0588017a779541811c1ea4ad92557fdc9ed0131679f13070e2" exitCode=0
Feb 26 17:30:15 crc kubenswrapper[4805]: I0226 17:30:15.497340 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92" event={"ID":"3b63836c-d48c-4cba-a661-0c6063f2cbbc","Type":"ContainerDied","Data":"b109633d9f3c9e0588017a779541811c1ea4ad92557fdc9ed0131679f13070e2"}
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.076477 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.176996 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-bundle\") pod \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") "
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.177086 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-util\") pod \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") "
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.177143 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zdv\" (UniqueName: \"kubernetes.io/projected/3b63836c-d48c-4cba-a661-0c6063f2cbbc-kube-api-access-r7zdv\") pod \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\" (UID: \"3b63836c-d48c-4cba-a661-0c6063f2cbbc\") "
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.178303 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-bundle" (OuterVolumeSpecName: "bundle") pod "3b63836c-d48c-4cba-a661-0c6063f2cbbc" (UID: "3b63836c-d48c-4cba-a661-0c6063f2cbbc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.183047 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b63836c-d48c-4cba-a661-0c6063f2cbbc-kube-api-access-r7zdv" (OuterVolumeSpecName: "kube-api-access-r7zdv") pod "3b63836c-d48c-4cba-a661-0c6063f2cbbc" (UID: "3b63836c-d48c-4cba-a661-0c6063f2cbbc"). InnerVolumeSpecName "kube-api-access-r7zdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.190592 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-util" (OuterVolumeSpecName: "util") pod "3b63836c-d48c-4cba-a661-0c6063f2cbbc" (UID: "3b63836c-d48c-4cba-a661-0c6063f2cbbc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.278500 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zdv\" (UniqueName: \"kubernetes.io/projected/3b63836c-d48c-4cba-a661-0c6063f2cbbc-kube-api-access-r7zdv\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.278983 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.279000 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3b63836c-d48c-4cba-a661-0c6063f2cbbc-util\") on node \"crc\" DevicePath \"\""
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.514444 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"250bedca-52a1-4dd3-a7ad-084e8372437d","Type":"ContainerStarted","Data":"18f8a78149674e8c7a37d83c5b765b224ead7a2d73e74619d6a1b74716a72e33"}
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.518377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92" event={"ID":"3b63836c-d48c-4cba-a661-0c6063f2cbbc","Type":"ContainerDied","Data":"9bbdf7b28cc64cae3d57f2f904f25a8789ac948fc3e503b04a918c1e3be89b91"}
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.518417 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bbdf7b28cc64cae3d57f2f904f25a8789ac948fc3e503b04a918c1e3be89b91"
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.518485 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92"
Feb 26 17:30:17 crc kubenswrapper[4805]: I0226 17:30:17.534375 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.509056047 podStartE2EDuration="7.534357594s" podCreationTimestamp="2026-02-26 17:30:10 +0000 UTC" firstStartedPulling="2026-02-26 17:30:14.17143538 +0000 UTC m=+928.733189719" lastFinishedPulling="2026-02-26 17:30:17.196736917 +0000 UTC m=+931.758491266" observedRunningTime="2026-02-26 17:30:17.532104766 +0000 UTC m=+932.093859125" watchObservedRunningTime="2026-02-26 17:30:17.534357594 +0000 UTC m=+932.096111933"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.346461 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl"]
Feb 26 17:30:23 crc kubenswrapper[4805]: E0226 17:30:23.347248 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="pull"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.347265 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="pull"
Feb 26 17:30:23 crc kubenswrapper[4805]: E0226 17:30:23.347276 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="util"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.347285 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="util"
Feb 26 17:30:23 crc kubenswrapper[4805]: E0226 17:30:23.347293 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="extract"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.347299 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="extract"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.347432 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b63836c-d48c-4cba-a661-0c6063f2cbbc" containerName="extract"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.348059 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.349910 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.351150 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.351201 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.351266 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.351506 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.353590 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-zzc5m"
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.363747 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl"]
Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.452129 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl"
Feb 26 17:30:23 crc kubenswrapper[4805]:
I0226 17:30:23.452445 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-webhook-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.452520 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqwh7\" (UniqueName: \"kubernetes.io/projected/9d1d4276-53fb-4d5a-814b-c29e097718d0-kube-api-access-wqwh7\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.452586 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9d1d4276-53fb-4d5a-814b-c29e097718d0-manager-config\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.452637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-apiservice-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.553889 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-apiservice-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.553968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.554001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-webhook-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.554084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqwh7\" (UniqueName: \"kubernetes.io/projected/9d1d4276-53fb-4d5a-814b-c29e097718d0-kube-api-access-wqwh7\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.554134 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9d1d4276-53fb-4d5a-814b-c29e097718d0-manager-config\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: 
\"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.555099 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/9d1d4276-53fb-4d5a-814b-c29e097718d0-manager-config\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.562650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.564261 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-webhook-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.569375 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqwh7\" (UniqueName: \"kubernetes.io/projected/9d1d4276-53fb-4d5a-814b-c29e097718d0-kube-api-access-wqwh7\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.569661 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9d1d4276-53fb-4d5a-814b-c29e097718d0-apiservice-cert\") pod \"loki-operator-controller-manager-dbdbb8494-f8mdl\" (UID: \"9d1d4276-53fb-4d5a-814b-c29e097718d0\") " pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:23 crc kubenswrapper[4805]: I0226 17:30:23.662087 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:24 crc kubenswrapper[4805]: I0226 17:30:24.180509 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl"] Feb 26 17:30:24 crc kubenswrapper[4805]: I0226 17:30:24.558966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" event={"ID":"9d1d4276-53fb-4d5a-814b-c29e097718d0","Type":"ContainerStarted","Data":"ab24bc56132d9bf9c52fdb908dc896f039bd27ecc465ebde528da828307a053a"} Feb 26 17:30:29 crc kubenswrapper[4805]: I0226 17:30:29.615077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" event={"ID":"9d1d4276-53fb-4d5a-814b-c29e097718d0","Type":"ContainerStarted","Data":"8f7b2617d2bd5ed42df4f174acde9149ca38d0117d21ea2f1ff49f0cd8fbc6ac"} Feb 26 17:30:32 crc kubenswrapper[4805]: I0226 17:30:32.977461 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:30:32 crc kubenswrapper[4805]: I0226 17:30:32.978200 4805 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:30:32 crc kubenswrapper[4805]: I0226 17:30:32.978244 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:30:32 crc kubenswrapper[4805]: I0226 17:30:32.978814 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d44138a55ef33aa1de9eac7f541bad377db04ef7075e41168f322227c042d08"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:30:32 crc kubenswrapper[4805]: I0226 17:30:32.978863 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://1d44138a55ef33aa1de9eac7f541bad377db04ef7075e41168f322227c042d08" gracePeriod=600 Feb 26 17:30:33 crc kubenswrapper[4805]: I0226 17:30:33.642104 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="1d44138a55ef33aa1de9eac7f541bad377db04ef7075e41168f322227c042d08" exitCode=0 Feb 26 17:30:33 crc kubenswrapper[4805]: I0226 17:30:33.642153 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"1d44138a55ef33aa1de9eac7f541bad377db04ef7075e41168f322227c042d08"} Feb 26 17:30:33 crc kubenswrapper[4805]: I0226 17:30:33.642190 4805 scope.go:117] "RemoveContainer" 
containerID="2ad740ea60d5ab6cc5388ad856d449d28f7d0452892c7d5d07969ba24766bab4" Feb 26 17:30:36 crc kubenswrapper[4805]: I0226 17:30:36.672532 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" event={"ID":"9d1d4276-53fb-4d5a-814b-c29e097718d0","Type":"ContainerStarted","Data":"93909ea53fdfbd715102ed07f8a07f7617cf5092f9735973cc5584bbbe364bf2"} Feb 26 17:30:36 crc kubenswrapper[4805]: I0226 17:30:36.673671 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:36 crc kubenswrapper[4805]: I0226 17:30:36.676531 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"990e4f598fe9e20fbcba699271a0052db1db79e508164396a947ff262b514683"} Feb 26 17:30:36 crc kubenswrapper[4805]: I0226 17:30:36.678141 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" Feb 26 17:30:36 crc kubenswrapper[4805]: I0226 17:30:36.719182 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-dbdbb8494-f8mdl" podStartSLOduration=1.893981561 podStartE2EDuration="13.719157754s" podCreationTimestamp="2026-02-26 17:30:23 +0000 UTC" firstStartedPulling="2026-02-26 17:30:24.188724348 +0000 UTC m=+938.750478697" lastFinishedPulling="2026-02-26 17:30:36.013900541 +0000 UTC m=+950.575654890" observedRunningTime="2026-02-26 17:30:36.70808181 +0000 UTC m=+951.269836159" watchObservedRunningTime="2026-02-26 17:30:36.719157754 +0000 UTC m=+951.280912113" Feb 26 17:30:49 crc kubenswrapper[4805]: I0226 17:30:49.096424 4805 scope.go:117] "RemoveContainer" 
containerID="82ff1457db2fa0cb9203af37c2fad5c9123810bcd878a8e5cb8bc28ef40f472e" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.330013 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq"] Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.332120 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.338254 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.343166 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq"] Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.442713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.442827 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.442880 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ml4vw\" (UniqueName: \"kubernetes.io/projected/22ae402a-acf3-450c-b0e8-e3ad05335343-kube-api-access-ml4vw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.545835 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml4vw\" (UniqueName: \"kubernetes.io/projected/22ae402a-acf3-450c-b0e8-e3ad05335343-kube-api-access-ml4vw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.545902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.546008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.550545 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-util\") pod 
\"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.551345 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.575374 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml4vw\" (UniqueName: \"kubernetes.io/projected/22ae402a-acf3-450c-b0e8-e3ad05335343-kube-api-access-ml4vw\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.653668 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:08 crc kubenswrapper[4805]: I0226 17:31:08.866485 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq"] Feb 26 17:31:09 crc kubenswrapper[4805]: I0226 17:31:09.266275 4805 generic.go:334] "Generic (PLEG): container finished" podID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerID="aa1c07b16ff03aa83d12b0a7935a6ccd9a02d9492e5e38b3f5221174f4764bee" exitCode=0 Feb 26 17:31:09 crc kubenswrapper[4805]: I0226 17:31:09.266539 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" event={"ID":"22ae402a-acf3-450c-b0e8-e3ad05335343","Type":"ContainerDied","Data":"aa1c07b16ff03aa83d12b0a7935a6ccd9a02d9492e5e38b3f5221174f4764bee"} Feb 26 17:31:09 crc kubenswrapper[4805]: I0226 17:31:09.266564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" event={"ID":"22ae402a-acf3-450c-b0e8-e3ad05335343","Type":"ContainerStarted","Data":"a5487d28323f2b80109e2b9cd4cd608fa474790cc481c901e7aa1a8c31090fbc"} Feb 26 17:31:12 crc kubenswrapper[4805]: I0226 17:31:12.288119 4805 generic.go:334] "Generic (PLEG): container finished" podID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerID="3d6ad8c675e30a2b4ce64101361f460e94b9d4e61eeb9817c4786dfeac2d49aa" exitCode=0 Feb 26 17:31:12 crc kubenswrapper[4805]: I0226 17:31:12.288206 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" event={"ID":"22ae402a-acf3-450c-b0e8-e3ad05335343","Type":"ContainerDied","Data":"3d6ad8c675e30a2b4ce64101361f460e94b9d4e61eeb9817c4786dfeac2d49aa"} Feb 26 17:31:13 crc kubenswrapper[4805]: I0226 17:31:13.298064 4805 
generic.go:334] "Generic (PLEG): container finished" podID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerID="967d6fac67e0e919b0d155ae7c9837e70a3d562bb772fb167793b4872fea833f" exitCode=0 Feb 26 17:31:13 crc kubenswrapper[4805]: I0226 17:31:13.298427 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" event={"ID":"22ae402a-acf3-450c-b0e8-e3ad05335343","Type":"ContainerDied","Data":"967d6fac67e0e919b0d155ae7c9837e70a3d562bb772fb167793b4872fea833f"} Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.555209 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.726849 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml4vw\" (UniqueName: \"kubernetes.io/projected/22ae402a-acf3-450c-b0e8-e3ad05335343-kube-api-access-ml4vw\") pod \"22ae402a-acf3-450c-b0e8-e3ad05335343\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.726917 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-bundle\") pod \"22ae402a-acf3-450c-b0e8-e3ad05335343\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.726994 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-util\") pod \"22ae402a-acf3-450c-b0e8-e3ad05335343\" (UID: \"22ae402a-acf3-450c-b0e8-e3ad05335343\") " Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.727437 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-bundle" (OuterVolumeSpecName: "bundle") pod "22ae402a-acf3-450c-b0e8-e3ad05335343" (UID: "22ae402a-acf3-450c-b0e8-e3ad05335343"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.732413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ae402a-acf3-450c-b0e8-e3ad05335343-kube-api-access-ml4vw" (OuterVolumeSpecName: "kube-api-access-ml4vw") pod "22ae402a-acf3-450c-b0e8-e3ad05335343" (UID: "22ae402a-acf3-450c-b0e8-e3ad05335343"). InnerVolumeSpecName "kube-api-access-ml4vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.737529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-util" (OuterVolumeSpecName: "util") pod "22ae402a-acf3-450c-b0e8-e3ad05335343" (UID: "22ae402a-acf3-450c-b0e8-e3ad05335343"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.828825 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml4vw\" (UniqueName: \"kubernetes.io/projected/22ae402a-acf3-450c-b0e8-e3ad05335343-kube-api-access-ml4vw\") on node \"crc\" DevicePath \"\"" Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.829119 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:31:14 crc kubenswrapper[4805]: I0226 17:31:14.829130 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/22ae402a-acf3-450c-b0e8-e3ad05335343-util\") on node \"crc\" DevicePath \"\"" Feb 26 17:31:15 crc kubenswrapper[4805]: I0226 17:31:15.317381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" event={"ID":"22ae402a-acf3-450c-b0e8-e3ad05335343","Type":"ContainerDied","Data":"a5487d28323f2b80109e2b9cd4cd608fa474790cc481c901e7aa1a8c31090fbc"} Feb 26 17:31:15 crc kubenswrapper[4805]: I0226 17:31:15.317425 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5487d28323f2b80109e2b9cd4cd608fa474790cc481c901e7aa1a8c31090fbc" Feb 26 17:31:15 crc kubenswrapper[4805]: I0226 17:31:15.317496 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.646292 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5"] Feb 26 17:31:20 crc kubenswrapper[4805]: E0226 17:31:20.646855 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="pull" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.646871 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="pull" Feb 26 17:31:20 crc kubenswrapper[4805]: E0226 17:31:20.646890 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="util" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.646897 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="util" Feb 26 17:31:20 crc kubenswrapper[4805]: E0226 17:31:20.646914 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="extract" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.646921 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="extract" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.647054 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ae402a-acf3-450c-b0e8-e3ad05335343" containerName="extract" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.647604 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.649910 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.651414 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.651451 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9rnjv" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.659612 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5"] Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.696866 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2khs\" (UniqueName: \"kubernetes.io/projected/37140a20-72cb-40f9-814f-2d6d02a3d4fa-kube-api-access-f2khs\") pod \"nmstate-operator-75c5dccd6c-vw2h5\" (UID: \"37140a20-72cb-40f9-814f-2d6d02a3d4fa\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.798138 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2khs\" (UniqueName: \"kubernetes.io/projected/37140a20-72cb-40f9-814f-2d6d02a3d4fa-kube-api-access-f2khs\") pod \"nmstate-operator-75c5dccd6c-vw2h5\" (UID: \"37140a20-72cb-40f9-814f-2d6d02a3d4fa\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.818222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2khs\" (UniqueName: \"kubernetes.io/projected/37140a20-72cb-40f9-814f-2d6d02a3d4fa-kube-api-access-f2khs\") pod \"nmstate-operator-75c5dccd6c-vw2h5\" (UID: 
\"37140a20-72cb-40f9-814f-2d6d02a3d4fa\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" Feb 26 17:31:20 crc kubenswrapper[4805]: I0226 17:31:20.962713 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" Feb 26 17:31:21 crc kubenswrapper[4805]: I0226 17:31:21.199443 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5"] Feb 26 17:31:21 crc kubenswrapper[4805]: I0226 17:31:21.358591 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" event={"ID":"37140a20-72cb-40f9-814f-2d6d02a3d4fa","Type":"ContainerStarted","Data":"3bc1ac3d432b8f8aabc845905bd2c036193f3693527276683f40d9a64b566e8f"} Feb 26 17:31:24 crc kubenswrapper[4805]: I0226 17:31:24.376950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" event={"ID":"37140a20-72cb-40f9-814f-2d6d02a3d4fa","Type":"ContainerStarted","Data":"58039e77c645c066fd4ef79cee8d14d9ff677fa386ad83b345954aafa0ad1197"} Feb 26 17:31:24 crc kubenswrapper[4805]: I0226 17:31:24.397821 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-vw2h5" podStartSLOduration=1.499626503 podStartE2EDuration="4.39779926s" podCreationTimestamp="2026-02-26 17:31:20 +0000 UTC" firstStartedPulling="2026-02-26 17:31:21.214233575 +0000 UTC m=+995.775987914" lastFinishedPulling="2026-02-26 17:31:24.112406332 +0000 UTC m=+998.674160671" observedRunningTime="2026-02-26 17:31:24.393811207 +0000 UTC m=+998.955565586" watchObservedRunningTime="2026-02-26 17:31:24.39779926 +0000 UTC m=+998.959553629" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.573450 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-kzzsr"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.575102 
4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.578556 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-qhrc4" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.597539 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-4tg62"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.598285 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.601691 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.604360 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-kzzsr"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.624723 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-4tg62"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.630159 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v5h82"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.631510 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/20b243d2-9299-4d1b-b27e-3525992664ce-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: \"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-dbus-socket\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-nmstate-lock\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-ovs-socket\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730702 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrfzq\" (UniqueName: \"kubernetes.io/projected/9545e547-3f0a-461b-ac54-b0e3bc910543-kube-api-access-xrfzq\") pod 
\"nmstate-metrics-69594cc75-kzzsr\" (UID: \"9545e547-3f0a-461b-ac54-b0e3bc910543\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730730 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbw7\" (UniqueName: \"kubernetes.io/projected/20b243d2-9299-4d1b-b27e-3525992664ce-kube-api-access-jcbw7\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: \"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.730755 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtr7d\" (UniqueName: \"kubernetes.io/projected/b589e613-548a-4e38-9802-b9aa97bea8ba-kube-api-access-wtr7d\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.829918 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.830663 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831496 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-ovs-socket\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831556 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrfzq\" (UniqueName: \"kubernetes.io/projected/9545e547-3f0a-461b-ac54-b0e3bc910543-kube-api-access-xrfzq\") pod \"nmstate-metrics-69594cc75-kzzsr\" (UID: \"9545e547-3f0a-461b-ac54-b0e3bc910543\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831588 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbw7\" (UniqueName: \"kubernetes.io/projected/20b243d2-9299-4d1b-b27e-3525992664ce-kube-api-access-jcbw7\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: \"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtr7d\" (UniqueName: \"kubernetes.io/projected/b589e613-548a-4e38-9802-b9aa97bea8ba-kube-api-access-wtr7d\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831665 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/20b243d2-9299-4d1b-b27e-3525992664ce-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: 
\"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831695 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-dbus-socket\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-nmstate-lock\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-nmstate-lock\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.831829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-ovs-socket\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: E0226 17:31:29.832139 4805 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 26 17:31:29 crc kubenswrapper[4805]: E0226 17:31:29.832189 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20b243d2-9299-4d1b-b27e-3525992664ce-tls-key-pair 
podName:20b243d2-9299-4d1b-b27e-3525992664ce nodeName:}" failed. No retries permitted until 2026-02-26 17:31:30.332171855 +0000 UTC m=+1004.893926194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/20b243d2-9299-4d1b-b27e-3525992664ce-tls-key-pair") pod "nmstate-webhook-786f45cff4-4tg62" (UID: "20b243d2-9299-4d1b-b27e-3525992664ce") : secret "openshift-nmstate-webhook" not found Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.832423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b589e613-548a-4e38-9802-b9aa97bea8ba-dbus-socket\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.834332 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.834373 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.834785 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-f2npq" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.847231 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b"] Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.857931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtr7d\" (UniqueName: \"kubernetes.io/projected/b589e613-548a-4e38-9802-b9aa97bea8ba-kube-api-access-wtr7d\") pod \"nmstate-handler-v5h82\" (UID: \"b589e613-548a-4e38-9802-b9aa97bea8ba\") " pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.859987 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrfzq\" (UniqueName: \"kubernetes.io/projected/9545e547-3f0a-461b-ac54-b0e3bc910543-kube-api-access-xrfzq\") pod \"nmstate-metrics-69594cc75-kzzsr\" (UID: \"9545e547-3f0a-461b-ac54-b0e3bc910543\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.867808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbw7\" (UniqueName: \"kubernetes.io/projected/20b243d2-9299-4d1b-b27e-3525992664ce-kube-api-access-jcbw7\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: \"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.891810 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.933458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6njr\" (UniqueName: \"kubernetes.io/projected/c118280e-50c2-42a7-a69f-5cd4654ad329-kube-api-access-j6njr\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.933820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c118280e-50c2-42a7-a69f-5cd4654ad329-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.933939 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c118280e-50c2-42a7-a69f-5cd4654ad329-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:29 crc kubenswrapper[4805]: I0226 17:31:29.984091 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.034806 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c118280e-50c2-42a7-a69f-5cd4654ad329-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.034872 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c118280e-50c2-42a7-a69f-5cd4654ad329-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.034920 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6njr\" (UniqueName: \"kubernetes.io/projected/c118280e-50c2-42a7-a69f-5cd4654ad329-kube-api-access-j6njr\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.038705 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c118280e-50c2-42a7-a69f-5cd4654ad329-nginx-conf\") pod 
\"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.058387 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c118280e-50c2-42a7-a69f-5cd4654ad329-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.066333 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68887c5b9d-ctxhp"] Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.068080 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.068788 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68887c5b9d-ctxhp"] Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.069623 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6njr\" (UniqueName: \"kubernetes.io/projected/c118280e-50c2-42a7-a69f-5cd4654ad329-kube-api-access-j6njr\") pod \"nmstate-console-plugin-5dcbbd79cf-69z2b\" (UID: \"c118280e-50c2-42a7-a69f-5cd4654ad329\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.144299 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.237404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54f8p\" (UniqueName: \"kubernetes.io/projected/f6584982-7053-4aa1-89d7-f7859ecc749b-kube-api-access-54f8p\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.237976 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-console-config\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.238015 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-service-ca\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.238060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6584982-7053-4aa1-89d7-f7859ecc749b-console-serving-cert\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.238084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-oauth-serving-cert\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.238121 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-trusted-ca-bundle\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.238146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6584982-7053-4aa1-89d7-f7859ecc749b-console-oauth-config\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.339852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/20b243d2-9299-4d1b-b27e-3525992664ce-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: \"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.339904 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54f8p\" (UniqueName: \"kubernetes.io/projected/f6584982-7053-4aa1-89d7-f7859ecc749b-kube-api-access-54f8p\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.339972 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-console-config\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-service-ca\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f6584982-7053-4aa1-89d7-f7859ecc749b-console-serving-cert\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-oauth-serving-cert\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340130 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-trusted-ca-bundle\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/f6584982-7053-4aa1-89d7-f7859ecc749b-console-oauth-config\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-console-config\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.340993 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-service-ca\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.341394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-oauth-serving-cert\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.341784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6584982-7053-4aa1-89d7-f7859ecc749b-trusted-ca-bundle\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.346155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f6584982-7053-4aa1-89d7-f7859ecc749b-console-serving-cert\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.347072 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/20b243d2-9299-4d1b-b27e-3525992664ce-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4tg62\" (UID: \"20b243d2-9299-4d1b-b27e-3525992664ce\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.347558 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f6584982-7053-4aa1-89d7-f7859ecc749b-console-oauth-config\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.358900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54f8p\" (UniqueName: \"kubernetes.io/projected/f6584982-7053-4aa1-89d7-f7859ecc749b-kube-api-access-54f8p\") pod \"console-68887c5b9d-ctxhp\" (UID: \"f6584982-7053-4aa1-89d7-f7859ecc749b\") " pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.362315 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-kzzsr"] Feb 26 17:31:30 crc kubenswrapper[4805]: W0226 17:31:30.362686 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9545e547_3f0a_461b_ac54_b0e3bc910543.slice/crio-5402c4dcbb258613611bbdac982722ed80e7c7abe10c247bac8691c2f69020ec WatchSource:0}: Error finding container 5402c4dcbb258613611bbdac982722ed80e7c7abe10c247bac8691c2f69020ec: Status 404 
returned error can't find the container with id 5402c4dcbb258613611bbdac982722ed80e7c7abe10c247bac8691c2f69020ec Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.401166 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.418313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v5h82" event={"ID":"b589e613-548a-4e38-9802-b9aa97bea8ba","Type":"ContainerStarted","Data":"abe6496e6d86227ed0b2ad208a4f801d5626b4136b6a1d79ce568762e9efb480"} Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.419295 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" event={"ID":"9545e547-3f0a-461b-ac54-b0e3bc910543","Type":"ContainerStarted","Data":"5402c4dcbb258613611bbdac982722ed80e7c7abe10c247bac8691c2f69020ec"} Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.512643 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.537466 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b"] Feb 26 17:31:30 crc kubenswrapper[4805]: W0226 17:31:30.545357 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc118280e_50c2_42a7_a69f_5cd4654ad329.slice/crio-8ca758492ba789d7e5445a2070077fd9e233166f933396827f226eb7330c74c7 WatchSource:0}: Error finding container 8ca758492ba789d7e5445a2070077fd9e233166f933396827f226eb7330c74c7: Status 404 returned error can't find the container with id 8ca758492ba789d7e5445a2070077fd9e233166f933396827f226eb7330c74c7 Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.593324 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68887c5b9d-ctxhp"] Feb 26 17:31:30 crc kubenswrapper[4805]: W0226 17:31:30.602932 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6584982_7053_4aa1_89d7_f7859ecc749b.slice/crio-899c3fd633e7de1a16ef9fe5a83043d81f0706d0c4a89d7c3dc860078bd927d2 WatchSource:0}: Error finding container 899c3fd633e7de1a16ef9fe5a83043d81f0706d0c4a89d7c3dc860078bd927d2: Status 404 returned error can't find the container with id 899c3fd633e7de1a16ef9fe5a83043d81f0706d0c4a89d7c3dc860078bd927d2 Feb 26 17:31:30 crc kubenswrapper[4805]: I0226 17:31:30.947925 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-4tg62"] Feb 26 17:31:30 crc kubenswrapper[4805]: W0226 17:31:30.950318 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b243d2_9299_4d1b_b27e_3525992664ce.slice/crio-a5f2b95c7491d1d36f12f002582bcf90ad2ed7acc8437fa5ec9935890f594a80 
WatchSource:0}: Error finding container a5f2b95c7491d1d36f12f002582bcf90ad2ed7acc8437fa5ec9935890f594a80: Status 404 returned error can't find the container with id a5f2b95c7491d1d36f12f002582bcf90ad2ed7acc8437fa5ec9935890f594a80 Feb 26 17:31:31 crc kubenswrapper[4805]: I0226 17:31:31.438202 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" event={"ID":"c118280e-50c2-42a7-a69f-5cd4654ad329","Type":"ContainerStarted","Data":"8ca758492ba789d7e5445a2070077fd9e233166f933396827f226eb7330c74c7"} Feb 26 17:31:31 crc kubenswrapper[4805]: I0226 17:31:31.439979 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68887c5b9d-ctxhp" event={"ID":"f6584982-7053-4aa1-89d7-f7859ecc749b","Type":"ContainerStarted","Data":"946aa25aa320c344d3cc57fe546f879c85f979d6248695b396e3814fe6bcf7af"} Feb 26 17:31:31 crc kubenswrapper[4805]: I0226 17:31:31.440053 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68887c5b9d-ctxhp" event={"ID":"f6584982-7053-4aa1-89d7-f7859ecc749b","Type":"ContainerStarted","Data":"899c3fd633e7de1a16ef9fe5a83043d81f0706d0c4a89d7c3dc860078bd927d2"} Feb 26 17:31:31 crc kubenswrapper[4805]: I0226 17:31:31.441958 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" event={"ID":"20b243d2-9299-4d1b-b27e-3525992664ce","Type":"ContainerStarted","Data":"a5f2b95c7491d1d36f12f002582bcf90ad2ed7acc8437fa5ec9935890f594a80"} Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.499502 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68887c5b9d-ctxhp" podStartSLOduration=5.499479932 podStartE2EDuration="5.499479932s" podCreationTimestamp="2026-02-26 17:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:31:31.459755935 +0000 UTC 
m=+1006.021510294" watchObservedRunningTime="2026-02-26 17:31:34.499479932 +0000 UTC m=+1009.061234271" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.503976 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5wzwf"] Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.505456 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.516718 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wzwf"] Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.616058 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-catalog-content\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.616126 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-utilities\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.616258 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzt8s\" (UniqueName: \"kubernetes.io/projected/0c52c031-06ea-4fef-9180-fa2496ba71e5-kube-api-access-wzt8s\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.718247 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-catalog-content\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.718317 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-utilities\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.718372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzt8s\" (UniqueName: \"kubernetes.io/projected/0c52c031-06ea-4fef-9180-fa2496ba71e5-kube-api-access-wzt8s\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.718913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-catalog-content\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.719078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-utilities\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.744963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wzt8s\" (UniqueName: \"kubernetes.io/projected/0c52c031-06ea-4fef-9180-fa2496ba71e5-kube-api-access-wzt8s\") pod \"certified-operators-5wzwf\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:34 crc kubenswrapper[4805]: I0226 17:31:34.828289 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:35 crc kubenswrapper[4805]: I0226 17:31:35.629603 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wzwf"] Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.476937 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v5h82" event={"ID":"b589e613-548a-4e38-9802-b9aa97bea8ba","Type":"ContainerStarted","Data":"95e3fc8e7bea03404be0ced2df900d8fd0e5bbbbc9ca15c5ca2d7e26035e4ce3"} Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.477280 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.479512 4805 generic.go:334] "Generic (PLEG): container finished" podID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerID="a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0" exitCode=0 Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.480342 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerDied","Data":"a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0"} Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.480372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" 
event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerStarted","Data":"b65e7f669419cc0181b0adae76db7c8ed8882a841fa4ecd97fb78a82bc9613e7"} Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.483755 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" event={"ID":"20b243d2-9299-4d1b-b27e-3525992664ce","Type":"ContainerStarted","Data":"22d150a955005d812c7993d219e050384a1bc75f86f9d0750ebe64b33acced41"} Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.484305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.485998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" event={"ID":"9545e547-3f0a-461b-ac54-b0e3bc910543","Type":"ContainerStarted","Data":"05f9c7275154325899eaacd67de3985abc070264089e1ba7c59062b5095ad049"} Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.493717 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" event={"ID":"c118280e-50c2-42a7-a69f-5cd4654ad329","Type":"ContainerStarted","Data":"a2382b9e825410738d65ea05c177651746bf382972eba3b346149a1407949d88"} Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.493935 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v5h82" podStartSLOduration=2.423431285 podStartE2EDuration="7.493922579s" podCreationTimestamp="2026-02-26 17:31:29 +0000 UTC" firstStartedPulling="2026-02-26 17:31:30.041760289 +0000 UTC m=+1004.603514628" lastFinishedPulling="2026-02-26 17:31:35.112251583 +0000 UTC m=+1009.674005922" observedRunningTime="2026-02-26 17:31:36.492875822 +0000 UTC m=+1011.054630161" watchObservedRunningTime="2026-02-26 17:31:36.493922579 +0000 UTC m=+1011.055676918" Feb 26 17:31:36 crc kubenswrapper[4805]: 
I0226 17:31:36.514845 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" podStartSLOduration=3.342407486 podStartE2EDuration="7.514826255s" podCreationTimestamp="2026-02-26 17:31:29 +0000 UTC" firstStartedPulling="2026-02-26 17:31:30.953219457 +0000 UTC m=+1005.514973796" lastFinishedPulling="2026-02-26 17:31:35.125638226 +0000 UTC m=+1009.687392565" observedRunningTime="2026-02-26 17:31:36.511169021 +0000 UTC m=+1011.072923370" watchObservedRunningTime="2026-02-26 17:31:36.514826255 +0000 UTC m=+1011.076580594" Feb 26 17:31:36 crc kubenswrapper[4805]: I0226 17:31:36.557838 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-69z2b" podStartSLOduration=2.9941170550000002 podStartE2EDuration="7.557744605s" podCreationTimestamp="2026-02-26 17:31:29 +0000 UTC" firstStartedPulling="2026-02-26 17:31:30.547349421 +0000 UTC m=+1005.109103760" lastFinishedPulling="2026-02-26 17:31:35.110976971 +0000 UTC m=+1009.672731310" observedRunningTime="2026-02-26 17:31:36.54972108 +0000 UTC m=+1011.111475419" watchObservedRunningTime="2026-02-26 17:31:36.557744605 +0000 UTC m=+1011.119498944" Feb 26 17:31:38 crc kubenswrapper[4805]: I0226 17:31:38.509478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerStarted","Data":"11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6"} Feb 26 17:31:38 crc kubenswrapper[4805]: I0226 17:31:38.511726 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" event={"ID":"9545e547-3f0a-461b-ac54-b0e3bc910543","Type":"ContainerStarted","Data":"1af96a8015dfdc221985b27ef94f37a37bb7a00df98303fc4dbafb48926ef0e5"} Feb 26 17:31:38 crc kubenswrapper[4805]: I0226 17:31:38.544495 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-kzzsr" podStartSLOduration=2.016362058 podStartE2EDuration="9.544474794s" podCreationTimestamp="2026-02-26 17:31:29 +0000 UTC" firstStartedPulling="2026-02-26 17:31:30.364927005 +0000 UTC m=+1004.926681354" lastFinishedPulling="2026-02-26 17:31:37.893039751 +0000 UTC m=+1012.454794090" observedRunningTime="2026-02-26 17:31:38.542180485 +0000 UTC m=+1013.103934834" watchObservedRunningTime="2026-02-26 17:31:38.544474794 +0000 UTC m=+1013.106229133" Feb 26 17:31:39 crc kubenswrapper[4805]: I0226 17:31:39.520647 4805 generic.go:334] "Generic (PLEG): container finished" podID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerID="11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6" exitCode=0 Feb 26 17:31:39 crc kubenswrapper[4805]: I0226 17:31:39.520738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerDied","Data":"11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6"} Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.401936 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.402050 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.408346 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.532047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" 
event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerStarted","Data":"b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4"} Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.536429 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68887c5b9d-ctxhp" Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.557118 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wzwf" podStartSLOduration=3.103869718 podStartE2EDuration="6.557103147s" podCreationTimestamp="2026-02-26 17:31:34 +0000 UTC" firstStartedPulling="2026-02-26 17:31:36.48185368 +0000 UTC m=+1011.043608019" lastFinishedPulling="2026-02-26 17:31:39.935087089 +0000 UTC m=+1014.496841448" observedRunningTime="2026-02-26 17:31:40.553558016 +0000 UTC m=+1015.115312355" watchObservedRunningTime="2026-02-26 17:31:40.557103147 +0000 UTC m=+1015.118857476" Feb 26 17:31:40 crc kubenswrapper[4805]: I0226 17:31:40.611124 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2dnn9"] Feb 26 17:31:44 crc kubenswrapper[4805]: I0226 17:31:44.829435 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:44 crc kubenswrapper[4805]: I0226 17:31:44.829752 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:44 crc kubenswrapper[4805]: I0226 17:31:44.883216 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:45 crc kubenswrapper[4805]: I0226 17:31:45.017248 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v5h82" Feb 26 17:31:45 crc kubenswrapper[4805]: I0226 17:31:45.605681 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:45 crc kubenswrapper[4805]: I0226 17:31:45.654798 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wzwf"] Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.576766 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wzwf" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="registry-server" containerID="cri-o://b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4" gracePeriod=2 Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.588440 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52jrl"] Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.590441 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.612914 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52jrl"] Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.701311 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-catalog-content\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.701357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-utilities\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 
17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.701382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfp76\" (UniqueName: \"kubernetes.io/projected/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-kube-api-access-tfp76\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.802568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-catalog-content\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.803184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-utilities\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.803111 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-catalog-content\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.803266 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfp76\" (UniqueName: \"kubernetes.io/projected/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-kube-api-access-tfp76\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " 
pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.803549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-utilities\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.835327 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfp76\" (UniqueName: \"kubernetes.io/projected/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-kube-api-access-tfp76\") pod \"community-operators-52jrl\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.914340 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:47 crc kubenswrapper[4805]: I0226 17:31:47.987275 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.111048 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-utilities\") pod \"0c52c031-06ea-4fef-9180-fa2496ba71e5\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.111131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-catalog-content\") pod \"0c52c031-06ea-4fef-9180-fa2496ba71e5\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.111199 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzt8s\" (UniqueName: \"kubernetes.io/projected/0c52c031-06ea-4fef-9180-fa2496ba71e5-kube-api-access-wzt8s\") pod \"0c52c031-06ea-4fef-9180-fa2496ba71e5\" (UID: \"0c52c031-06ea-4fef-9180-fa2496ba71e5\") " Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.112986 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-utilities" (OuterVolumeSpecName: "utilities") pod "0c52c031-06ea-4fef-9180-fa2496ba71e5" (UID: "0c52c031-06ea-4fef-9180-fa2496ba71e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.129233 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c52c031-06ea-4fef-9180-fa2496ba71e5-kube-api-access-wzt8s" (OuterVolumeSpecName: "kube-api-access-wzt8s") pod "0c52c031-06ea-4fef-9180-fa2496ba71e5" (UID: "0c52c031-06ea-4fef-9180-fa2496ba71e5"). InnerVolumeSpecName "kube-api-access-wzt8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.171543 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52jrl"] Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.213056 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.213103 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzt8s\" (UniqueName: \"kubernetes.io/projected/0c52c031-06ea-4fef-9180-fa2496ba71e5-kube-api-access-wzt8s\") on node \"crc\" DevicePath \"\"" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.226656 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c52c031-06ea-4fef-9180-fa2496ba71e5" (UID: "0c52c031-06ea-4fef-9180-fa2496ba71e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.313968 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c52c031-06ea-4fef-9180-fa2496ba71e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.586746 4805 generic.go:334] "Generic (PLEG): container finished" podID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerID="b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4" exitCode=0 Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.586831 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerDied","Data":"b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4"} Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.586887 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wzwf" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.586913 4805 scope.go:117] "RemoveContainer" containerID="b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.586900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wzwf" event={"ID":"0c52c031-06ea-4fef-9180-fa2496ba71e5","Type":"ContainerDied","Data":"b65e7f669419cc0181b0adae76db7c8ed8882a841fa4ecd97fb78a82bc9613e7"} Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.589043 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerID="08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863" exitCode=0 Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.589074 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerDied","Data":"08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863"} Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.589091 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerStarted","Data":"82c145e005307bd5e28afba46612b303678899748ea8e3836280fe341e79644b"} Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.610358 4805 scope.go:117] "RemoveContainer" containerID="11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.637589 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wzwf"] Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.639112 4805 scope.go:117] "RemoveContainer" 
containerID="a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.641507 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wzwf"] Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.654771 4805 scope.go:117] "RemoveContainer" containerID="b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4" Feb 26 17:31:48 crc kubenswrapper[4805]: E0226 17:31:48.656617 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4\": container with ID starting with b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4 not found: ID does not exist" containerID="b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.656658 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4"} err="failed to get container status \"b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4\": rpc error: code = NotFound desc = could not find container \"b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4\": container with ID starting with b653da729a3718e5db455e525e3eb1b456399c943e49853fd35ad8936df6edf4 not found: ID does not exist" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.656683 4805 scope.go:117] "RemoveContainer" containerID="11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6" Feb 26 17:31:48 crc kubenswrapper[4805]: E0226 17:31:48.657188 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6\": container with ID starting with 
11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6 not found: ID does not exist" containerID="11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.657325 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6"} err="failed to get container status \"11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6\": rpc error: code = NotFound desc = could not find container \"11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6\": container with ID starting with 11975a840fa61de4158d6dd4cca14a3ee5110e7ae1f7b8d286a09d7ebf419ac6 not found: ID does not exist" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.657435 4805 scope.go:117] "RemoveContainer" containerID="a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0" Feb 26 17:31:48 crc kubenswrapper[4805]: E0226 17:31:48.657851 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0\": container with ID starting with a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0 not found: ID does not exist" containerID="a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.657961 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0"} err="failed to get container status \"a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0\": rpc error: code = NotFound desc = could not find container \"a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0\": container with ID starting with a17584ff0b048a2200d97ad20deccdfab9e3833178aad08f96adf1699d8d50b0 not found: ID does not 
exist" Feb 26 17:31:48 crc kubenswrapper[4805]: I0226 17:31:48.966203 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" path="/var/lib/kubelet/pods/0c52c031-06ea-4fef-9180-fa2496ba71e5/volumes" Feb 26 17:31:50 crc kubenswrapper[4805]: I0226 17:31:50.518503 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4tg62" Feb 26 17:31:50 crc kubenswrapper[4805]: I0226 17:31:50.602625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerStarted","Data":"d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d"} Feb 26 17:31:51 crc kubenswrapper[4805]: I0226 17:31:51.609144 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerID="d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d" exitCode=0 Feb 26 17:31:51 crc kubenswrapper[4805]: I0226 17:31:51.609195 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerDied","Data":"d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d"} Feb 26 17:31:53 crc kubenswrapper[4805]: I0226 17:31:53.623213 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerStarted","Data":"91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8"} Feb 26 17:31:53 crc kubenswrapper[4805]: I0226 17:31:53.645112 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52jrl" podStartSLOduration=2.526523717 podStartE2EDuration="6.645091855s" podCreationTimestamp="2026-02-26 17:31:47 +0000 UTC" 
firstStartedPulling="2026-02-26 17:31:48.590796155 +0000 UTC m=+1023.152550534" lastFinishedPulling="2026-02-26 17:31:52.709364333 +0000 UTC m=+1027.271118672" observedRunningTime="2026-02-26 17:31:53.642368205 +0000 UTC m=+1028.204122574" watchObservedRunningTime="2026-02-26 17:31:53.645091855 +0000 UTC m=+1028.206846194" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.324706 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v6p6m"] Feb 26 17:31:54 crc kubenswrapper[4805]: E0226 17:31:54.324944 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="extract-content" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.324955 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="extract-content" Feb 26 17:31:54 crc kubenswrapper[4805]: E0226 17:31:54.324976 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="extract-utilities" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.324981 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="extract-utilities" Feb 26 17:31:54 crc kubenswrapper[4805]: E0226 17:31:54.324995 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="registry-server" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.325002 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="registry-server" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.325343 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c52c031-06ea-4fef-9180-fa2496ba71e5" containerName="registry-server" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.326161 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.345151 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6p6m"] Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.399130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-catalog-content\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.399190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-utilities\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.399251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdr4\" (UniqueName: \"kubernetes.io/projected/fae282fa-40c1-4521-b1b9-73faca1a560c-kube-api-access-tqdr4\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.499842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqdr4\" (UniqueName: \"kubernetes.io/projected/fae282fa-40c1-4521-b1b9-73faca1a560c-kube-api-access-tqdr4\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.499921 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-catalog-content\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.499948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-utilities\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.500394 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-utilities\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.500456 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-catalog-content\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.520034 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqdr4\" (UniqueName: \"kubernetes.io/projected/fae282fa-40c1-4521-b1b9-73faca1a560c-kube-api-access-tqdr4\") pod \"redhat-marketplace-v6p6m\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:54 crc kubenswrapper[4805]: I0226 17:31:54.641089 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:31:55 crc kubenswrapper[4805]: I0226 17:31:55.058769 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6p6m"] Feb 26 17:31:55 crc kubenswrapper[4805]: W0226 17:31:55.061815 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfae282fa_40c1_4521_b1b9_73faca1a560c.slice/crio-dae9fce9403224a6d7940dfea26d8f3fdfb6118fc7146876d57d7217164ba239 WatchSource:0}: Error finding container dae9fce9403224a6d7940dfea26d8f3fdfb6118fc7146876d57d7217164ba239: Status 404 returned error can't find the container with id dae9fce9403224a6d7940dfea26d8f3fdfb6118fc7146876d57d7217164ba239 Feb 26 17:31:55 crc kubenswrapper[4805]: I0226 17:31:55.636068 4805 generic.go:334] "Generic (PLEG): container finished" podID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerID="aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a" exitCode=0 Feb 26 17:31:55 crc kubenswrapper[4805]: I0226 17:31:55.636127 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6p6m" event={"ID":"fae282fa-40c1-4521-b1b9-73faca1a560c","Type":"ContainerDied","Data":"aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a"} Feb 26 17:31:55 crc kubenswrapper[4805]: I0226 17:31:55.636183 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6p6m" event={"ID":"fae282fa-40c1-4521-b1b9-73faca1a560c","Type":"ContainerStarted","Data":"dae9fce9403224a6d7940dfea26d8f3fdfb6118fc7146876d57d7217164ba239"} Feb 26 17:31:57 crc kubenswrapper[4805]: I0226 17:31:57.656965 4805 generic.go:334] "Generic (PLEG): container finished" podID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerID="7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146" exitCode=0 Feb 26 17:31:57 crc kubenswrapper[4805]: I0226 
17:31:57.657353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6p6m" event={"ID":"fae282fa-40c1-4521-b1b9-73faca1a560c","Type":"ContainerDied","Data":"7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146"} Feb 26 17:31:57 crc kubenswrapper[4805]: I0226 17:31:57.915339 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:57 crc kubenswrapper[4805]: I0226 17:31:57.915740 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:57 crc kubenswrapper[4805]: I0226 17:31:57.978044 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:58 crc kubenswrapper[4805]: I0226 17:31:58.753639 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:31:59 crc kubenswrapper[4805]: I0226 17:31:59.697888 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6p6m" event={"ID":"fae282fa-40c1-4521-b1b9-73faca1a560c","Type":"ContainerStarted","Data":"48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb"} Feb 26 17:31:59 crc kubenswrapper[4805]: I0226 17:31:59.717969 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v6p6m" podStartSLOduration=2.302917689 podStartE2EDuration="5.717938469s" podCreationTimestamp="2026-02-26 17:31:54 +0000 UTC" firstStartedPulling="2026-02-26 17:31:55.637836787 +0000 UTC m=+1030.199591146" lastFinishedPulling="2026-02-26 17:31:59.052857587 +0000 UTC m=+1033.614611926" observedRunningTime="2026-02-26 17:31:59.714158582 +0000 UTC m=+1034.275912921" watchObservedRunningTime="2026-02-26 17:31:59.717938469 +0000 UTC m=+1034.279692808" Feb 26 
17:31:59 crc kubenswrapper[4805]: I0226 17:31:59.918326 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52jrl"] Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.138381 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535452-cmdkc"] Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.139064 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.141501 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.141768 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.142226 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.176375 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535452-cmdkc"] Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.191962 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68cw\" (UniqueName: \"kubernetes.io/projected/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce-kube-api-access-v68cw\") pod \"auto-csr-approver-29535452-cmdkc\" (UID: \"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce\") " pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.292900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68cw\" (UniqueName: \"kubernetes.io/projected/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce-kube-api-access-v68cw\") pod \"auto-csr-approver-29535452-cmdkc\" (UID: 
\"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce\") " pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.322929 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68cw\" (UniqueName: \"kubernetes.io/projected/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce-kube-api-access-v68cw\") pod \"auto-csr-approver-29535452-cmdkc\" (UID: \"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce\") " pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.474047 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.685977 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535452-cmdkc"] Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.711516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" event={"ID":"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce","Type":"ContainerStarted","Data":"9171c5727860ab32933b5f47cc4c65071c2472442b701b4ac0608ff9e494f888"} Feb 26 17:32:00 crc kubenswrapper[4805]: I0226 17:32:00.712140 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52jrl" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="registry-server" containerID="cri-o://91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8" gracePeriod=2 Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.248895 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.312758 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-utilities\") pod \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.313035 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-catalog-content\") pod \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.313151 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfp76\" (UniqueName: \"kubernetes.io/projected/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-kube-api-access-tfp76\") pod \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\" (UID: \"ab071f9e-430d-46fa-bb26-dbd09f7fc95e\") " Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.313725 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-utilities" (OuterVolumeSpecName: "utilities") pod "ab071f9e-430d-46fa-bb26-dbd09f7fc95e" (UID: "ab071f9e-430d-46fa-bb26-dbd09f7fc95e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.314008 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.341449 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-kube-api-access-tfp76" (OuterVolumeSpecName: "kube-api-access-tfp76") pod "ab071f9e-430d-46fa-bb26-dbd09f7fc95e" (UID: "ab071f9e-430d-46fa-bb26-dbd09f7fc95e"). InnerVolumeSpecName "kube-api-access-tfp76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.414846 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfp76\" (UniqueName: \"kubernetes.io/projected/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-kube-api-access-tfp76\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.722941 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerID="91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8" exitCode=0 Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.723073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerDied","Data":"91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8"} Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.723629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52jrl" event={"ID":"ab071f9e-430d-46fa-bb26-dbd09f7fc95e","Type":"ContainerDied","Data":"82c145e005307bd5e28afba46612b303678899748ea8e3836280fe341e79644b"} Feb 26 17:32:01 crc kubenswrapper[4805]: 
I0226 17:32:01.723660 4805 scope.go:117] "RemoveContainer" containerID="91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.723096 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52jrl" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.746085 4805 scope.go:117] "RemoveContainer" containerID="d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.774938 4805 scope.go:117] "RemoveContainer" containerID="08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.850758 4805 scope.go:117] "RemoveContainer" containerID="91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8" Feb 26 17:32:01 crc kubenswrapper[4805]: E0226 17:32:01.852848 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8\": container with ID starting with 91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8 not found: ID does not exist" containerID="91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.852923 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8"} err="failed to get container status \"91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8\": rpc error: code = NotFound desc = could not find container \"91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8\": container with ID starting with 91a75141bcfd0cab0481d2bcb3083126f9f24764b2f6b934f8ef64e308e9b1f8 not found: ID does not exist" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.852977 4805 
scope.go:117] "RemoveContainer" containerID="d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d" Feb 26 17:32:01 crc kubenswrapper[4805]: E0226 17:32:01.853702 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d\": container with ID starting with d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d not found: ID does not exist" containerID="d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.853792 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d"} err="failed to get container status \"d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d\": rpc error: code = NotFound desc = could not find container \"d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d\": container with ID starting with d84bbc9f472ee5b60a4fa95fef415a9720bc4d574e771388390ba02ba5dbff1d not found: ID does not exist" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.853857 4805 scope.go:117] "RemoveContainer" containerID="08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863" Feb 26 17:32:01 crc kubenswrapper[4805]: E0226 17:32:01.854678 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863\": container with ID starting with 08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863 not found: ID does not exist" containerID="08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863" Feb 26 17:32:01 crc kubenswrapper[4805]: I0226 17:32:01.854715 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863"} err="failed to get container status \"08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863\": rpc error: code = NotFound desc = could not find container \"08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863\": container with ID starting with 08fbdb7cd9d17267bbbaf7904788b35a61d56e002278942266730566c983f863 not found: ID does not exist" Feb 26 17:32:03 crc kubenswrapper[4805]: I0226 17:32:03.356496 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab071f9e-430d-46fa-bb26-dbd09f7fc95e" (UID: "ab071f9e-430d-46fa-bb26-dbd09f7fc95e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:32:03 crc kubenswrapper[4805]: I0226 17:32:03.444789 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab071f9e-430d-46fa-bb26-dbd09f7fc95e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:03 crc kubenswrapper[4805]: I0226 17:32:03.553890 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52jrl"] Feb 26 17:32:03 crc kubenswrapper[4805]: I0226 17:32:03.559441 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52jrl"] Feb 26 17:32:04 crc kubenswrapper[4805]: I0226 17:32:04.642267 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:32:04 crc kubenswrapper[4805]: I0226 17:32:04.643307 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:32:04 crc kubenswrapper[4805]: I0226 17:32:04.695562 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:32:04 crc kubenswrapper[4805]: I0226 17:32:04.780948 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:32:04 crc kubenswrapper[4805]: I0226 17:32:04.965075 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" path="/var/lib/kubelet/pods/ab071f9e-430d-46fa-bb26-dbd09f7fc95e/volumes" Feb 26 17:32:05 crc kubenswrapper[4805]: I0226 17:32:05.651253 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-2dnn9" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" containerName="console" containerID="cri-o://7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b" gracePeriod=15 Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.521358 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6p6m"] Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.668696 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2dnn9_dd3f2c3b-2417-44c9-bd45-02b10d68cf24/console/0.log" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.669998 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759359 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-2dnn9_dd3f2c3b-2417-44c9-bd45-02b10d68cf24/console/0.log" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759410 4805 generic.go:334] "Generic (PLEG): container finished" podID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" containerID="7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b" exitCode=2 Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759469 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dnn9" event={"ID":"dd3f2c3b-2417-44c9-bd45-02b10d68cf24","Type":"ContainerDied","Data":"7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b"} Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-2dnn9" event={"ID":"dd3f2c3b-2417-44c9-bd45-02b10d68cf24","Type":"ContainerDied","Data":"6c7fc7cc98f5a11339ae09d9e842054dc3e3266731dca078ea954da269a655c0"} Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759558 4805 scope.go:117] "RemoveContainer" containerID="7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759585 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v6p6m" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="registry-server" containerID="cri-o://48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb" gracePeriod=2 Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.759486 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-2dnn9" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.787877 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-service-ca\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788002 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-oauth-config\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-config\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788146 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-serving-cert\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788165 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-trusted-ca-bundle\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788262 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-oauth-serving-cert\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788289 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2d6l\" (UniqueName: \"kubernetes.io/projected/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-kube-api-access-p2d6l\") pod \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\" (UID: \"dd3f2c3b-2417-44c9-bd45-02b10d68cf24\") " Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788777 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-config" (OuterVolumeSpecName: "console-config") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.788788 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-service-ca" (OuterVolumeSpecName: "service-ca") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.789302 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.789408 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.793893 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-kube-api-access-p2d6l" (OuterVolumeSpecName: "kube-api-access-p2d6l") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "kube-api-access-p2d6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.794004 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.794134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dd3f2c3b-2417-44c9-bd45-02b10d68cf24" (UID: "dd3f2c3b-2417-44c9-bd45-02b10d68cf24"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889370 4805 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889397 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2d6l\" (UniqueName: \"kubernetes.io/projected/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-kube-api-access-p2d6l\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889407 4805 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-service-ca\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889416 4805 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889424 4805 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889431 4805 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:06 crc kubenswrapper[4805]: I0226 17:32:06.889441 4805 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3f2c3b-2417-44c9-bd45-02b10d68cf24-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:07 crc 
kubenswrapper[4805]: I0226 17:32:07.029137 4805 scope.go:117] "RemoveContainer" containerID="7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b" Feb 26 17:32:07 crc kubenswrapper[4805]: E0226 17:32:07.030096 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b\": container with ID starting with 7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b not found: ID does not exist" containerID="7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.030154 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b"} err="failed to get container status \"7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b\": rpc error: code = NotFound desc = could not find container \"7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b\": container with ID starting with 7f837bdfdc837e512c61bb638e4a979e8ac1cea90bf65554544533a743c5594b not found: ID does not exist" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.144602 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-2dnn9"] Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.149873 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-2dnn9"] Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.389190 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.395851 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-utilities\") pod \"fae282fa-40c1-4521-b1b9-73faca1a560c\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.395917 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqdr4\" (UniqueName: \"kubernetes.io/projected/fae282fa-40c1-4521-b1b9-73faca1a560c-kube-api-access-tqdr4\") pod \"fae282fa-40c1-4521-b1b9-73faca1a560c\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.395952 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-catalog-content\") pod \"fae282fa-40c1-4521-b1b9-73faca1a560c\" (UID: \"fae282fa-40c1-4521-b1b9-73faca1a560c\") " Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.396990 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-utilities" (OuterVolumeSpecName: "utilities") pod "fae282fa-40c1-4521-b1b9-73faca1a560c" (UID: "fae282fa-40c1-4521-b1b9-73faca1a560c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.403670 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae282fa-40c1-4521-b1b9-73faca1a560c-kube-api-access-tqdr4" (OuterVolumeSpecName: "kube-api-access-tqdr4") pod "fae282fa-40c1-4521-b1b9-73faca1a560c" (UID: "fae282fa-40c1-4521-b1b9-73faca1a560c"). InnerVolumeSpecName "kube-api-access-tqdr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.433288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fae282fa-40c1-4521-b1b9-73faca1a560c" (UID: "fae282fa-40c1-4521-b1b9-73faca1a560c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.496952 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.497031 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqdr4\" (UniqueName: \"kubernetes.io/projected/fae282fa-40c1-4521-b1b9-73faca1a560c-kube-api-access-tqdr4\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.497041 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fae282fa-40c1-4521-b1b9-73faca1a560c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.769662 4805 generic.go:334] "Generic (PLEG): container finished" podID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerID="48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb" exitCode=0 Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.769792 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v6p6m" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.770123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6p6m" event={"ID":"fae282fa-40c1-4521-b1b9-73faca1a560c","Type":"ContainerDied","Data":"48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb"} Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.770184 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v6p6m" event={"ID":"fae282fa-40c1-4521-b1b9-73faca1a560c","Type":"ContainerDied","Data":"dae9fce9403224a6d7940dfea26d8f3fdfb6118fc7146876d57d7217164ba239"} Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.770209 4805 scope.go:117] "RemoveContainer" containerID="48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.775518 4805 generic.go:334] "Generic (PLEG): container finished" podID="2bd043d5-4b5d-47f7-887b-5e1685b4c0ce" containerID="e07412a516f5409469792dc0a2b18405f86d480df730a3e67af22989803299b8" exitCode=0 Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.775561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" event={"ID":"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce","Type":"ContainerDied","Data":"e07412a516f5409469792dc0a2b18405f86d480df730a3e67af22989803299b8"} Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.795775 4805 scope.go:117] "RemoveContainer" containerID="7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.840909 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6p6m"] Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.848418 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v6p6m"] 
Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.865909 4805 scope.go:117] "RemoveContainer" containerID="aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.887390 4805 scope.go:117] "RemoveContainer" containerID="48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb" Feb 26 17:32:07 crc kubenswrapper[4805]: E0226 17:32:07.888570 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb\": container with ID starting with 48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb not found: ID does not exist" containerID="48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.888630 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb"} err="failed to get container status \"48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb\": rpc error: code = NotFound desc = could not find container \"48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb\": container with ID starting with 48b3d4d3d763952f3e7a7af5b5a77a534e1c17021ba8d109af9ad6fd04a6d9fb not found: ID does not exist" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.888663 4805 scope.go:117] "RemoveContainer" containerID="7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146" Feb 26 17:32:07 crc kubenswrapper[4805]: E0226 17:32:07.889159 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146\": container with ID starting with 7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146 not found: ID does not exist" 
containerID="7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.889192 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146"} err="failed to get container status \"7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146\": rpc error: code = NotFound desc = could not find container \"7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146\": container with ID starting with 7ebadf762e1e397741cadba8544bf406cb1d8ff2b3a0dbfe6129a1bc3f21f146 not found: ID does not exist" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.889209 4805 scope.go:117] "RemoveContainer" containerID="aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a" Feb 26 17:32:07 crc kubenswrapper[4805]: E0226 17:32:07.889497 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a\": container with ID starting with aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a not found: ID does not exist" containerID="aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a" Feb 26 17:32:07 crc kubenswrapper[4805]: I0226 17:32:07.889533 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a"} err="failed to get container status \"aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a\": rpc error: code = NotFound desc = could not find container \"aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a\": container with ID starting with aecba009316d59fa24f1650d03faa72a59053366f4e52f419edc30786e8da45a not found: ID does not exist" Feb 26 17:32:08 crc kubenswrapper[4805]: I0226 17:32:08.961789 4805 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" path="/var/lib/kubelet/pods/dd3f2c3b-2417-44c9-bd45-02b10d68cf24/volumes" Feb 26 17:32:08 crc kubenswrapper[4805]: I0226 17:32:08.963182 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" path="/var/lib/kubelet/pods/fae282fa-40c1-4521-b1b9-73faca1a560c/volumes" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.038987 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.157694 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9"] Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.158158 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="registry-server" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.158248 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="registry-server" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.158325 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="registry-server" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.158395 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="registry-server" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.158479 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" containerName="console" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.158546 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" 
containerName="console" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.158615 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="extract-content" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.158681 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="extract-content" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.158775 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="extract-content" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.158837 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="extract-content" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.158899 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="extract-utilities" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.158960 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="extract-utilities" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.159036 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd043d5-4b5d-47f7-887b-5e1685b4c0ce" containerName="oc" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.159107 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd043d5-4b5d-47f7-887b-5e1685b4c0ce" containerName="oc" Feb 26 17:32:09 crc kubenswrapper[4805]: E0226 17:32:09.159193 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="extract-utilities" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.159257 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="extract-utilities" Feb 26 
17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.159444 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab071f9e-430d-46fa-bb26-dbd09f7fc95e" containerName="registry-server" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.159534 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd043d5-4b5d-47f7-887b-5e1685b4c0ce" containerName="oc" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.159605 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae282fa-40c1-4521-b1b9-73faca1a560c" containerName="registry-server" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.159678 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3f2c3b-2417-44c9-bd45-02b10d68cf24" containerName="console" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.160632 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.163515 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.170705 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9"] Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.230113 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v68cw\" (UniqueName: \"kubernetes.io/projected/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce-kube-api-access-v68cw\") pod \"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce\" (UID: \"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce\") " Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.236403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce-kube-api-access-v68cw" 
(OuterVolumeSpecName: "kube-api-access-v68cw") pod "2bd043d5-4b5d-47f7-887b-5e1685b4c0ce" (UID: "2bd043d5-4b5d-47f7-887b-5e1685b4c0ce"). InnerVolumeSpecName "kube-api-access-v68cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.331533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjrk4\" (UniqueName: \"kubernetes.io/projected/6684ce9e-9190-4147-961f-4d9b437d17be-kube-api-access-qjrk4\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.331587 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.331611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.332026 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v68cw\" (UniqueName: \"kubernetes.io/projected/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce-kube-api-access-v68cw\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:09 crc 
kubenswrapper[4805]: I0226 17:32:09.433632 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjrk4\" (UniqueName: \"kubernetes.io/projected/6684ce9e-9190-4147-961f-4d9b437d17be-kube-api-access-qjrk4\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.433690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.433721 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.434270 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.434326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.450066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjrk4\" (UniqueName: \"kubernetes.io/projected/6684ce9e-9190-4147-961f-4d9b437d17be-kube-api-access-qjrk4\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.475168 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.656916 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9"] Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.791206 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" event={"ID":"6684ce9e-9190-4147-961f-4d9b437d17be","Type":"ContainerStarted","Data":"73c9077852182349b821913f015541101509b98d16ed98fc068b831f091fbf18"} Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.791259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" event={"ID":"6684ce9e-9190-4147-961f-4d9b437d17be","Type":"ContainerStarted","Data":"21d38a057b49334b01ada316b3cf45fbc117fbbf1c8e210988607f0f2796848e"} Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.793452 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" event={"ID":"2bd043d5-4b5d-47f7-887b-5e1685b4c0ce","Type":"ContainerDied","Data":"9171c5727860ab32933b5f47cc4c65071c2472442b701b4ac0608ff9e494f888"} Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.793495 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9171c5727860ab32933b5f47cc4c65071c2472442b701b4ac0608ff9e494f888" Feb 26 17:32:09 crc kubenswrapper[4805]: I0226 17:32:09.793500 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535452-cmdkc" Feb 26 17:32:10 crc kubenswrapper[4805]: I0226 17:32:10.090963 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535446-6bz22"] Feb 26 17:32:10 crc kubenswrapper[4805]: I0226 17:32:10.095649 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535446-6bz22"] Feb 26 17:32:10 crc kubenswrapper[4805]: I0226 17:32:10.807523 4805 generic.go:334] "Generic (PLEG): container finished" podID="6684ce9e-9190-4147-961f-4d9b437d17be" containerID="73c9077852182349b821913f015541101509b98d16ed98fc068b831f091fbf18" exitCode=0 Feb 26 17:32:10 crc kubenswrapper[4805]: I0226 17:32:10.807841 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" event={"ID":"6684ce9e-9190-4147-961f-4d9b437d17be","Type":"ContainerDied","Data":"73c9077852182349b821913f015541101509b98d16ed98fc068b831f091fbf18"} Feb 26 17:32:10 crc kubenswrapper[4805]: I0226 17:32:10.972938 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bff9b9-9a19-4950-b00a-395ca080797c" path="/var/lib/kubelet/pods/80bff9b9-9a19-4950-b00a-395ca080797c/volumes" Feb 26 17:32:12 crc kubenswrapper[4805]: I0226 17:32:12.822418 4805 generic.go:334] "Generic (PLEG): container 
finished" podID="6684ce9e-9190-4147-961f-4d9b437d17be" containerID="22d49fbad403a9a7f56afd57f713efdd1c9caba5c1bdb7cb20214966ff42fc5e" exitCode=0 Feb 26 17:32:12 crc kubenswrapper[4805]: I0226 17:32:12.822465 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" event={"ID":"6684ce9e-9190-4147-961f-4d9b437d17be","Type":"ContainerDied","Data":"22d49fbad403a9a7f56afd57f713efdd1c9caba5c1bdb7cb20214966ff42fc5e"} Feb 26 17:32:13 crc kubenswrapper[4805]: I0226 17:32:13.830592 4805 generic.go:334] "Generic (PLEG): container finished" podID="6684ce9e-9190-4147-961f-4d9b437d17be" containerID="880ba29776dfc3f136bad3c80b94307cc0a3c69b60ee6a3e6fc90f4d391cc425" exitCode=0 Feb 26 17:32:13 crc kubenswrapper[4805]: I0226 17:32:13.830645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" event={"ID":"6684ce9e-9190-4147-961f-4d9b437d17be","Type":"ContainerDied","Data":"880ba29776dfc3f136bad3c80b94307cc0a3c69b60ee6a3e6fc90f4d391cc425"} Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.079720 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.102765 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-bundle\") pod \"6684ce9e-9190-4147-961f-4d9b437d17be\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.102819 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjrk4\" (UniqueName: \"kubernetes.io/projected/6684ce9e-9190-4147-961f-4d9b437d17be-kube-api-access-qjrk4\") pod \"6684ce9e-9190-4147-961f-4d9b437d17be\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.102864 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-util\") pod \"6684ce9e-9190-4147-961f-4d9b437d17be\" (UID: \"6684ce9e-9190-4147-961f-4d9b437d17be\") " Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.103966 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-bundle" (OuterVolumeSpecName: "bundle") pod "6684ce9e-9190-4147-961f-4d9b437d17be" (UID: "6684ce9e-9190-4147-961f-4d9b437d17be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.108714 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6684ce9e-9190-4147-961f-4d9b437d17be-kube-api-access-qjrk4" (OuterVolumeSpecName: "kube-api-access-qjrk4") pod "6684ce9e-9190-4147-961f-4d9b437d17be" (UID: "6684ce9e-9190-4147-961f-4d9b437d17be"). InnerVolumeSpecName "kube-api-access-qjrk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.117436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-util" (OuterVolumeSpecName: "util") pod "6684ce9e-9190-4147-961f-4d9b437d17be" (UID: "6684ce9e-9190-4147-961f-4d9b437d17be"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.204090 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.204124 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjrk4\" (UniqueName: \"kubernetes.io/projected/6684ce9e-9190-4147-961f-4d9b437d17be-kube-api-access-qjrk4\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.204136 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6684ce9e-9190-4147-961f-4d9b437d17be-util\") on node \"crc\" DevicePath \"\"" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.849061 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" event={"ID":"6684ce9e-9190-4147-961f-4d9b437d17be","Type":"ContainerDied","Data":"21d38a057b49334b01ada316b3cf45fbc117fbbf1c8e210988607f0f2796848e"} Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.849117 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d38a057b49334b01ada316b3cf45fbc117fbbf1c8e210988607f0f2796848e" Feb 26 17:32:15 crc kubenswrapper[4805]: I0226 17:32:15.849183 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.098225 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8"] Feb 26 17:32:23 crc kubenswrapper[4805]: E0226 17:32:23.099057 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="extract" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.099070 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="extract" Feb 26 17:32:23 crc kubenswrapper[4805]: E0226 17:32:23.099086 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="pull" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.099092 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="pull" Feb 26 17:32:23 crc kubenswrapper[4805]: E0226 17:32:23.099105 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="util" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.099112 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="util" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.099221 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6684ce9e-9190-4147-961f-4d9b437d17be" containerName="extract" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.099634 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.102178 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.102353 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.102720 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.102736 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-l6gmd" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.107409 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.119759 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8"] Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.301443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpc77\" (UniqueName: \"kubernetes.io/projected/22a4700e-04cb-4a48-9596-e0813f515868-kube-api-access-bpc77\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.301499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22a4700e-04cb-4a48-9596-e0813f515868-webhook-cert\") pod 
\"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.301523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22a4700e-04cb-4a48-9596-e0813f515868-apiservice-cert\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.403174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpc77\" (UniqueName: \"kubernetes.io/projected/22a4700e-04cb-4a48-9596-e0813f515868-kube-api-access-bpc77\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.403297 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22a4700e-04cb-4a48-9596-e0813f515868-webhook-cert\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.403337 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22a4700e-04cb-4a48-9596-e0813f515868-apiservice-cert\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc 
kubenswrapper[4805]: I0226 17:32:23.409758 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22a4700e-04cb-4a48-9596-e0813f515868-webhook-cert\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.409818 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22a4700e-04cb-4a48-9596-e0813f515868-apiservice-cert\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.436842 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpc77\" (UniqueName: \"kubernetes.io/projected/22a4700e-04cb-4a48-9596-e0813f515868-kube-api-access-bpc77\") pod \"metallb-operator-controller-manager-54558c788d-wmhq8\" (UID: \"22a4700e-04cb-4a48-9596-e0813f515868\") " pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.475384 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq"] Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.476260 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.480422 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xndfb" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.481314 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.482585 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.491328 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq"] Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.504870 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmfz\" (UniqueName: \"kubernetes.io/projected/154d74df-d117-44e5-86c4-b4a72182153e-kube-api-access-fgmfz\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.504924 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/154d74df-d117-44e5-86c4-b4a72182153e-apiservice-cert\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.504949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/154d74df-d117-44e5-86c4-b4a72182153e-webhook-cert\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.606716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgmfz\" (UniqueName: \"kubernetes.io/projected/154d74df-d117-44e5-86c4-b4a72182153e-kube-api-access-fgmfz\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.606785 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/154d74df-d117-44e5-86c4-b4a72182153e-apiservice-cert\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.606810 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/154d74df-d117-44e5-86c4-b4a72182153e-webhook-cert\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.610074 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/154d74df-d117-44e5-86c4-b4a72182153e-webhook-cert\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc 
kubenswrapper[4805]: I0226 17:32:23.611916 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/154d74df-d117-44e5-86c4-b4a72182153e-apiservice-cert\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.632610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgmfz\" (UniqueName: \"kubernetes.io/projected/154d74df-d117-44e5-86c4-b4a72182153e-kube-api-access-fgmfz\") pod \"metallb-operator-webhook-server-54896b5f59-xjslq\" (UID: \"154d74df-d117-44e5-86c4-b4a72182153e\") " pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.718478 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:23 crc kubenswrapper[4805]: I0226 17:32:23.796489 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:24 crc kubenswrapper[4805]: I0226 17:32:24.121524 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq"] Feb 26 17:32:24 crc kubenswrapper[4805]: I0226 17:32:24.245663 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8"] Feb 26 17:32:24 crc kubenswrapper[4805]: W0226 17:32:24.248706 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22a4700e_04cb_4a48_9596_e0813f515868.slice/crio-4b2f26511670001cd2f78e63c4b539db1466cd070954a5d9e1b5ada01f2e0365 WatchSource:0}: Error finding container 4b2f26511670001cd2f78e63c4b539db1466cd070954a5d9e1b5ada01f2e0365: Status 404 returned error can't find the container with id 4b2f26511670001cd2f78e63c4b539db1466cd070954a5d9e1b5ada01f2e0365 Feb 26 17:32:24 crc kubenswrapper[4805]: I0226 17:32:24.925579 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" event={"ID":"22a4700e-04cb-4a48-9596-e0813f515868","Type":"ContainerStarted","Data":"4b2f26511670001cd2f78e63c4b539db1466cd070954a5d9e1b5ada01f2e0365"} Feb 26 17:32:24 crc kubenswrapper[4805]: I0226 17:32:24.927177 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" event={"ID":"154d74df-d117-44e5-86c4-b4a72182153e","Type":"ContainerStarted","Data":"9a961f92054e77688e5b9a61facdd075e499ad1f682d3af0ed3a2dd7e0cfe964"} Feb 26 17:32:30 crc kubenswrapper[4805]: I0226 17:32:30.987368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" 
event={"ID":"22a4700e-04cb-4a48-9596-e0813f515868","Type":"ContainerStarted","Data":"d5ab577518aee5fcc54ee8212940144eceba3ddb37c0e6631629aa61d07f4b03"} Feb 26 17:32:30 crc kubenswrapper[4805]: I0226 17:32:30.987852 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:32:30 crc kubenswrapper[4805]: I0226 17:32:30.989200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" event={"ID":"154d74df-d117-44e5-86c4-b4a72182153e","Type":"ContainerStarted","Data":"fa31743447899f475acd311d92b98779853b7cca2cca6a26982a01a9440ffb64"} Feb 26 17:32:30 crc kubenswrapper[4805]: I0226 17:32:30.989367 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:31 crc kubenswrapper[4805]: I0226 17:32:31.006543 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" podStartSLOduration=1.765124119 podStartE2EDuration="8.006518342s" podCreationTimestamp="2026-02-26 17:32:23 +0000 UTC" firstStartedPulling="2026-02-26 17:32:24.25364294 +0000 UTC m=+1058.815397279" lastFinishedPulling="2026-02-26 17:32:30.495037163 +0000 UTC m=+1065.056791502" observedRunningTime="2026-02-26 17:32:31.004271006 +0000 UTC m=+1065.566025355" watchObservedRunningTime="2026-02-26 17:32:31.006518342 +0000 UTC m=+1065.568272701" Feb 26 17:32:31 crc kubenswrapper[4805]: I0226 17:32:31.026848 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" podStartSLOduration=1.658379197 podStartE2EDuration="8.026826597s" podCreationTimestamp="2026-02-26 17:32:23 +0000 UTC" firstStartedPulling="2026-02-26 17:32:24.13044728 +0000 UTC m=+1058.692201619" lastFinishedPulling="2026-02-26 
17:32:30.49889469 +0000 UTC m=+1065.060649019" observedRunningTime="2026-02-26 17:32:31.024505888 +0000 UTC m=+1065.586260227" watchObservedRunningTime="2026-02-26 17:32:31.026826597 +0000 UTC m=+1065.588580946" Feb 26 17:32:43 crc kubenswrapper[4805]: I0226 17:32:43.800827 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-54896b5f59-xjslq" Feb 26 17:32:49 crc kubenswrapper[4805]: I0226 17:32:49.213233 4805 scope.go:117] "RemoveContainer" containerID="4ef3f1e43cc6400150fb7d1ee5c46370790b44e1041fdf1f06bf0af1bb23ae5d" Feb 26 17:33:02 crc kubenswrapper[4805]: I0226 17:33:02.978349 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:33:02 crc kubenswrapper[4805]: I0226 17:33:02.978920 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:33:03 crc kubenswrapper[4805]: I0226 17:33:03.720757 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-54558c788d-wmhq8" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.480476 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj"] Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.481656 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.487766 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-bnhld" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.494613 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gb6n5"] Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.497158 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.499812 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj"] Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.500090 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.501057 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.508578 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.560804 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-djvlh"] Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.561957 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.564051 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ptl58" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.564284 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.564415 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.568908 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.599645 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-4jj6l"] Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.601128 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.605102 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.612810 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-4jj6l"] Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-metrics\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29c75\" (UniqueName: \"kubernetes.io/projected/98934f34-7841-49dc-b326-c76aa0c09017-kube-api-access-29c75\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-metrics-certs\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656879 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-frr-sockets\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656909 
4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n69kx\" (UniqueName: \"kubernetes.io/projected/417cdacf-3299-4488-abd5-ceb51272f3be-kube-api-access-n69kx\") pod \"frr-k8s-webhook-server-7f989f654f-qpcbj\" (UID: \"417cdacf-3299-4488-abd5-ceb51272f3be\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmcx\" (UniqueName: \"kubernetes.io/projected/0cde9605-24e4-48e2-b0b7-a4bf09039031-kube-api-access-rjmcx\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98934f34-7841-49dc-b326-c76aa0c09017-frr-startup\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.656991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98934f34-7841-49dc-b326-c76aa0c09017-metrics-certs\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.657013 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-reloader\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.657102 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/417cdacf-3299-4488-abd5-ceb51272f3be-cert\") pod \"frr-k8s-webhook-server-7f989f654f-qpcbj\" (UID: \"417cdacf-3299-4488-abd5-ceb51272f3be\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.657125 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-frr-conf\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.657143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.657172 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cde9605-24e4-48e2-b0b7-a4bf09039031-metallb-excludel2\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/417cdacf-3299-4488-abd5-ceb51272f3be-cert\") pod \"frr-k8s-webhook-server-7f989f654f-qpcbj\" (UID: \"417cdacf-3299-4488-abd5-ceb51272f3be\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758188 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-frr-conf\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cde9605-24e4-48e2-b0b7-a4bf09039031-metallb-excludel2\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758256 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-metrics\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758273 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29c75\" (UniqueName: \"kubernetes.io/projected/98934f34-7841-49dc-b326-c76aa0c09017-kube-api-access-29c75\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758287 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-metrics-certs\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") 
" pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-frr-sockets\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758336 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vn7\" (UniqueName: \"kubernetes.io/projected/2481b089-843f-4898-9f85-36769bac7219-kube-api-access-p2vn7\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758361 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n69kx\" (UniqueName: \"kubernetes.io/projected/417cdacf-3299-4488-abd5-ceb51272f3be-kube-api-access-n69kx\") pod \"frr-k8s-webhook-server-7f989f654f-qpcbj\" (UID: \"417cdacf-3299-4488-abd5-ceb51272f3be\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjmcx\" (UniqueName: \"kubernetes.io/projected/0cde9605-24e4-48e2-b0b7-a4bf09039031-kube-api-access-rjmcx\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758430 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98934f34-7841-49dc-b326-c76aa0c09017-frr-startup\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 
17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758458 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2481b089-843f-4898-9f85-36769bac7219-metrics-certs\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758486 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98934f34-7841-49dc-b326-c76aa0c09017-metrics-certs\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-reloader\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.758549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2481b089-843f-4898-9f85-36769bac7219-cert\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.759934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-frr-sockets\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.760142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" 
(UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-frr-conf\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: E0226 17:33:04.760337 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 17:33:04 crc kubenswrapper[4805]: E0226 17:33:04.760410 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist podName:0cde9605-24e4-48e2-b0b7-a4bf09039031 nodeName:}" failed. No retries permitted until 2026-02-26 17:33:05.260393017 +0000 UTC m=+1099.822147356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist") pod "speaker-djvlh" (UID: "0cde9605-24e4-48e2-b0b7-a4bf09039031") : secret "metallb-memberlist" not found Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.760490 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-metrics\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.760912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cde9605-24e4-48e2-b0b7-a4bf09039031-metallb-excludel2\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.761154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/98934f34-7841-49dc-b326-c76aa0c09017-reloader\") pod \"frr-k8s-gb6n5\" (UID: 
\"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.761457 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/98934f34-7841-49dc-b326-c76aa0c09017-frr-startup\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.764618 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-metrics-certs\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.764885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/417cdacf-3299-4488-abd5-ceb51272f3be-cert\") pod \"frr-k8s-webhook-server-7f989f654f-qpcbj\" (UID: \"417cdacf-3299-4488-abd5-ceb51272f3be\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.767824 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98934f34-7841-49dc-b326-c76aa0c09017-metrics-certs\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.789862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjmcx\" (UniqueName: \"kubernetes.io/projected/0cde9605-24e4-48e2-b0b7-a4bf09039031-kube-api-access-rjmcx\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.797623 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n69kx\" (UniqueName: \"kubernetes.io/projected/417cdacf-3299-4488-abd5-ceb51272f3be-kube-api-access-n69kx\") pod \"frr-k8s-webhook-server-7f989f654f-qpcbj\" (UID: \"417cdacf-3299-4488-abd5-ceb51272f3be\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.799239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29c75\" (UniqueName: \"kubernetes.io/projected/98934f34-7841-49dc-b326-c76aa0c09017-kube-api-access-29c75\") pod \"frr-k8s-gb6n5\" (UID: \"98934f34-7841-49dc-b326-c76aa0c09017\") " pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.803395 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.816331 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.860147 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2481b089-843f-4898-9f85-36769bac7219-cert\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.860604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2vn7\" (UniqueName: \"kubernetes.io/projected/2481b089-843f-4898-9f85-36769bac7219-kube-api-access-p2vn7\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.860648 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2481b089-843f-4898-9f85-36769bac7219-metrics-certs\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.862422 4805 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.864147 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2481b089-843f-4898-9f85-36769bac7219-metrics-certs\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.873856 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2481b089-843f-4898-9f85-36769bac7219-cert\") 
pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.874573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2vn7\" (UniqueName: \"kubernetes.io/projected/2481b089-843f-4898-9f85-36769bac7219-kube-api-access-p2vn7\") pod \"controller-86ddb6bd46-4jj6l\" (UID: \"2481b089-843f-4898-9f85-36769bac7219\") " pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:04 crc kubenswrapper[4805]: I0226 17:33:04.917224 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:05 crc kubenswrapper[4805]: I0226 17:33:05.193634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"1171087f0cb7808ebe7481fc0887336bba5bd69dfbf602fc5d4a62c7902beb54"} Feb 26 17:33:05 crc kubenswrapper[4805]: I0226 17:33:05.249622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj"] Feb 26 17:33:05 crc kubenswrapper[4805]: I0226 17:33:05.267814 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:05 crc kubenswrapper[4805]: E0226 17:33:05.267957 4805 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 26 17:33:05 crc kubenswrapper[4805]: E0226 17:33:05.267999 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist podName:0cde9605-24e4-48e2-b0b7-a4bf09039031 nodeName:}" failed. 
No retries permitted until 2026-02-26 17:33:06.267986738 +0000 UTC m=+1100.829741077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist") pod "speaker-djvlh" (UID: "0cde9605-24e4-48e2-b0b7-a4bf09039031") : secret "metallb-memberlist" not found Feb 26 17:33:05 crc kubenswrapper[4805]: I0226 17:33:05.323303 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-4jj6l"] Feb 26 17:33:05 crc kubenswrapper[4805]: W0226 17:33:05.323881 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2481b089_843f_4898_9f85_36769bac7219.slice/crio-b3a251ceec4b11656785a8dde301987403c8a1855dbb365851cc35c6f5bbb099 WatchSource:0}: Error finding container b3a251ceec4b11656785a8dde301987403c8a1855dbb365851cc35c6f5bbb099: Status 404 returned error can't find the container with id b3a251ceec4b11656785a8dde301987403c8a1855dbb365851cc35c6f5bbb099 Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.200870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-4jj6l" event={"ID":"2481b089-843f-4898-9f85-36769bac7219","Type":"ContainerStarted","Data":"6cb9ec64989f6eb77dc0a67ab29855042d47cae980b1835b13090c8bf5cff8c8"} Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.201196 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.201208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-4jj6l" event={"ID":"2481b089-843f-4898-9f85-36769bac7219","Type":"ContainerStarted","Data":"4a877e0701fc2a0141a399424c36471ac4be0806a4e7bfc08c76332002577b18"} Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.201217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/controller-86ddb6bd46-4jj6l" event={"ID":"2481b089-843f-4898-9f85-36769bac7219","Type":"ContainerStarted","Data":"b3a251ceec4b11656785a8dde301987403c8a1855dbb365851cc35c6f5bbb099"} Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.202087 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" event={"ID":"417cdacf-3299-4488-abd5-ceb51272f3be","Type":"ContainerStarted","Data":"5781c7396e96da9fdeb2a63387c2720c30fdc000b573c4825b4130ecf0d27e70"} Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.217469 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-4jj6l" podStartSLOduration=2.217449605 podStartE2EDuration="2.217449605s" podCreationTimestamp="2026-02-26 17:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:33:06.213379512 +0000 UTC m=+1100.775133851" watchObservedRunningTime="2026-02-26 17:33:06.217449605 +0000 UTC m=+1100.779203944" Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.281922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.287155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cde9605-24e4-48e2-b0b7-a4bf09039031-memberlist\") pod \"speaker-djvlh\" (UID: \"0cde9605-24e4-48e2-b0b7-a4bf09039031\") " pod="metallb-system/speaker-djvlh" Feb 26 17:33:06 crc kubenswrapper[4805]: I0226 17:33:06.380830 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-djvlh" Feb 26 17:33:07 crc kubenswrapper[4805]: I0226 17:33:07.211680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-djvlh" event={"ID":"0cde9605-24e4-48e2-b0b7-a4bf09039031","Type":"ContainerStarted","Data":"7bd3b5bb8331138eaa561d9c35e4d910c337236556af10c9d0d2d33be0db123b"} Feb 26 17:33:07 crc kubenswrapper[4805]: I0226 17:33:07.211993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-djvlh" event={"ID":"0cde9605-24e4-48e2-b0b7-a4bf09039031","Type":"ContainerStarted","Data":"4513b14bfd5aab2b4730fbc1fdf350bf9eebe1be800ab66f95054c57804a5a61"} Feb 26 17:33:07 crc kubenswrapper[4805]: I0226 17:33:07.212005 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-djvlh" event={"ID":"0cde9605-24e4-48e2-b0b7-a4bf09039031","Type":"ContainerStarted","Data":"c8053779e47c1a11acb9db3418dd014c8b868364f4b269e6dd90df7a1f07fc5b"} Feb 26 17:33:07 crc kubenswrapper[4805]: I0226 17:33:07.212632 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-djvlh" Feb 26 17:33:07 crc kubenswrapper[4805]: I0226 17:33:07.238486 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-djvlh" podStartSLOduration=3.238470595 podStartE2EDuration="3.238470595s" podCreationTimestamp="2026-02-26 17:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:33:07.231836177 +0000 UTC m=+1101.793590516" watchObservedRunningTime="2026-02-26 17:33:07.238470595 +0000 UTC m=+1101.800224934" Feb 26 17:33:12 crc kubenswrapper[4805]: I0226 17:33:12.266555 4805 generic.go:334] "Generic (PLEG): container finished" podID="98934f34-7841-49dc-b326-c76aa0c09017" containerID="1f41c81217dabcbf34aa353c9f1699d046c0037940f2dbc437a6fd05432fe157" exitCode=0 Feb 26 17:33:12 crc kubenswrapper[4805]: 
I0226 17:33:12.266903 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerDied","Data":"1f41c81217dabcbf34aa353c9f1699d046c0037940f2dbc437a6fd05432fe157"} Feb 26 17:33:13 crc kubenswrapper[4805]: I0226 17:33:13.274636 4805 generic.go:334] "Generic (PLEG): container finished" podID="98934f34-7841-49dc-b326-c76aa0c09017" containerID="94b117c23399537734f5be4a78fb729ffe97258163a5bbc4034bcd229928065d" exitCode=0 Feb 26 17:33:13 crc kubenswrapper[4805]: I0226 17:33:13.274745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerDied","Data":"94b117c23399537734f5be4a78fb729ffe97258163a5bbc4034bcd229928065d"} Feb 26 17:33:13 crc kubenswrapper[4805]: I0226 17:33:13.276258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" event={"ID":"417cdacf-3299-4488-abd5-ceb51272f3be","Type":"ContainerStarted","Data":"c1d0a48f5e6f0293726730efbb6b08f9daa3acad264d7d539a71729e4f92f2c5"} Feb 26 17:33:13 crc kubenswrapper[4805]: I0226 17:33:13.276702 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:13 crc kubenswrapper[4805]: I0226 17:33:13.316635 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" podStartSLOduration=1.8122847389999999 podStartE2EDuration="9.31661674s" podCreationTimestamp="2026-02-26 17:33:04 +0000 UTC" firstStartedPulling="2026-02-26 17:33:05.260348664 +0000 UTC m=+1099.822103003" lastFinishedPulling="2026-02-26 17:33:12.764680655 +0000 UTC m=+1107.326435004" observedRunningTime="2026-02-26 17:33:13.316234981 +0000 UTC m=+1107.877989340" watchObservedRunningTime="2026-02-26 17:33:13.31661674 +0000 UTC m=+1107.878371079" Feb 26 
17:33:14 crc kubenswrapper[4805]: I0226 17:33:14.284078 4805 generic.go:334] "Generic (PLEG): container finished" podID="98934f34-7841-49dc-b326-c76aa0c09017" containerID="63dda6f888e6fc07b7ba9da4d18bb8538e8ba27c9cf13847616138abf564470f" exitCode=0 Feb 26 17:33:14 crc kubenswrapper[4805]: I0226 17:33:14.284126 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerDied","Data":"63dda6f888e6fc07b7ba9da4d18bb8538e8ba27c9cf13847616138abf564470f"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296477 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"5018975bb4565012cf617f07b3069788677cdbb423035083b213b8c452557a56"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296521 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"835f076f5ba237d39039bb3e44b68d688bb04e9132f006e8dc4c9f85d930c174"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"159ac5c74f6c7ab89321ff254b3097706d6107f90d66a67897ea5097c1d53204"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296543 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"f3021b0f443e3b1228f737776ce2069dab79a03a9820d3f157d948e9e5c18252"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" 
event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"2e1fb186648b07918caf68e55d661a04f9bd9c8d6ee0eab785c5f73ecd15fecb"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gb6n5" event={"ID":"98934f34-7841-49dc-b326-c76aa0c09017","Type":"ContainerStarted","Data":"9c00f1562d961c0cacbf965660a060f2199000a416a0d663a77192a1f927a33a"} Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.296595 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:15 crc kubenswrapper[4805]: I0226 17:33:15.346424 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gb6n5" podStartSLOduration=4.913976797 podStartE2EDuration="11.346399669s" podCreationTimestamp="2026-02-26 17:33:04 +0000 UTC" firstStartedPulling="2026-02-26 17:33:05.091351552 +0000 UTC m=+1099.653105891" lastFinishedPulling="2026-02-26 17:33:11.523774424 +0000 UTC m=+1106.085528763" observedRunningTime="2026-02-26 17:33:15.339407582 +0000 UTC m=+1109.901161921" watchObservedRunningTime="2026-02-26 17:33:15.346399669 +0000 UTC m=+1109.908154008" Feb 26 17:33:16 crc kubenswrapper[4805]: I0226 17:33:16.384328 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-djvlh" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.363092 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z95bm"] Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.364066 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.365734 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xrt7v" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.366310 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.367373 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.377418 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z95bm"] Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.462957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66knd\" (UniqueName: \"kubernetes.io/projected/99d01e35-c134-4312-8b5b-1a84ecc2dd68-kube-api-access-66knd\") pod \"openstack-operator-index-z95bm\" (UID: \"99d01e35-c134-4312-8b5b-1a84ecc2dd68\") " pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.564410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66knd\" (UniqueName: \"kubernetes.io/projected/99d01e35-c134-4312-8b5b-1a84ecc2dd68-kube-api-access-66knd\") pod \"openstack-operator-index-z95bm\" (UID: \"99d01e35-c134-4312-8b5b-1a84ecc2dd68\") " pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.580883 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66knd\" (UniqueName: \"kubernetes.io/projected/99d01e35-c134-4312-8b5b-1a84ecc2dd68-kube-api-access-66knd\") pod \"openstack-operator-index-z95bm\" (UID: 
\"99d01e35-c134-4312-8b5b-1a84ecc2dd68\") " pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.682740 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.817249 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:19 crc kubenswrapper[4805]: I0226 17:33:19.858270 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:20 crc kubenswrapper[4805]: I0226 17:33:20.134887 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z95bm"] Feb 26 17:33:20 crc kubenswrapper[4805]: W0226 17:33:20.144699 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99d01e35_c134_4312_8b5b_1a84ecc2dd68.slice/crio-eef0e5cc3d2e4f119b7b0218aafdda728d597f7d8f94d8e48485eaa9c480499b WatchSource:0}: Error finding container eef0e5cc3d2e4f119b7b0218aafdda728d597f7d8f94d8e48485eaa9c480499b: Status 404 returned error can't find the container with id eef0e5cc3d2e4f119b7b0218aafdda728d597f7d8f94d8e48485eaa9c480499b Feb 26 17:33:20 crc kubenswrapper[4805]: I0226 17:33:20.328792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z95bm" event={"ID":"99d01e35-c134-4312-8b5b-1a84ecc2dd68","Type":"ContainerStarted","Data":"eef0e5cc3d2e4f119b7b0218aafdda728d597f7d8f94d8e48485eaa9c480499b"} Feb 26 17:33:22 crc kubenswrapper[4805]: I0226 17:33:22.749158 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-z95bm"] Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.353534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-z95bm" event={"ID":"99d01e35-c134-4312-8b5b-1a84ecc2dd68","Type":"ContainerStarted","Data":"237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77"} Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.353726 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-z95bm" podUID="99d01e35-c134-4312-8b5b-1a84ecc2dd68" containerName="registry-server" containerID="cri-o://237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77" gracePeriod=2 Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.354600 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gq9w7"] Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.355516 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.368438 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gq9w7"] Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.380392 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z95bm" podStartSLOduration=1.828880743 podStartE2EDuration="4.38037341s" podCreationTimestamp="2026-02-26 17:33:19 +0000 UTC" firstStartedPulling="2026-02-26 17:33:20.147294853 +0000 UTC m=+1114.709049222" lastFinishedPulling="2026-02-26 17:33:22.69878756 +0000 UTC m=+1117.260541889" observedRunningTime="2026-02-26 17:33:23.378926684 +0000 UTC m=+1117.940681063" watchObservedRunningTime="2026-02-26 17:33:23.38037341 +0000 UTC m=+1117.942127749" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.522092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d56g\" (UniqueName: 
\"kubernetes.io/projected/c98178ae-c45b-4d31-a1b9-afa30a2b25d2-kube-api-access-6d56g\") pod \"openstack-operator-index-gq9w7\" (UID: \"c98178ae-c45b-4d31-a1b9-afa30a2b25d2\") " pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.623894 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d56g\" (UniqueName: \"kubernetes.io/projected/c98178ae-c45b-4d31-a1b9-afa30a2b25d2-kube-api-access-6d56g\") pod \"openstack-operator-index-gq9w7\" (UID: \"c98178ae-c45b-4d31-a1b9-afa30a2b25d2\") " pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.652140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d56g\" (UniqueName: \"kubernetes.io/projected/c98178ae-c45b-4d31-a1b9-afa30a2b25d2-kube-api-access-6d56g\") pod \"openstack-operator-index-gq9w7\" (UID: \"c98178ae-c45b-4d31-a1b9-afa30a2b25d2\") " pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.686678 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.759779 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.926785 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66knd\" (UniqueName: \"kubernetes.io/projected/99d01e35-c134-4312-8b5b-1a84ecc2dd68-kube-api-access-66knd\") pod \"99d01e35-c134-4312-8b5b-1a84ecc2dd68\" (UID: \"99d01e35-c134-4312-8b5b-1a84ecc2dd68\") " Feb 26 17:33:23 crc kubenswrapper[4805]: I0226 17:33:23.930770 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d01e35-c134-4312-8b5b-1a84ecc2dd68-kube-api-access-66knd" (OuterVolumeSpecName: "kube-api-access-66knd") pod "99d01e35-c134-4312-8b5b-1a84ecc2dd68" (UID: "99d01e35-c134-4312-8b5b-1a84ecc2dd68"). InnerVolumeSpecName "kube-api-access-66knd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.028547 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66knd\" (UniqueName: \"kubernetes.io/projected/99d01e35-c134-4312-8b5b-1a84ecc2dd68-kube-api-access-66knd\") on node \"crc\" DevicePath \"\"" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.138952 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gq9w7"] Feb 26 17:33:24 crc kubenswrapper[4805]: W0226 17:33:24.145694 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc98178ae_c45b_4d31_a1b9_afa30a2b25d2.slice/crio-b4477888c89b7cad939e3f44916276deca02ad1a4afde0e17b77cfa60324c539 WatchSource:0}: Error finding container b4477888c89b7cad939e3f44916276deca02ad1a4afde0e17b77cfa60324c539: Status 404 returned error can't find the container with id b4477888c89b7cad939e3f44916276deca02ad1a4afde0e17b77cfa60324c539 Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.361541 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gq9w7" event={"ID":"c98178ae-c45b-4d31-a1b9-afa30a2b25d2","Type":"ContainerStarted","Data":"b4477888c89b7cad939e3f44916276deca02ad1a4afde0e17b77cfa60324c539"} Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.363217 4805 generic.go:334] "Generic (PLEG): container finished" podID="99d01e35-c134-4312-8b5b-1a84ecc2dd68" containerID="237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77" exitCode=0 Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.363245 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z95bm" event={"ID":"99d01e35-c134-4312-8b5b-1a84ecc2dd68","Type":"ContainerDied","Data":"237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77"} Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.363260 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z95bm" event={"ID":"99d01e35-c134-4312-8b5b-1a84ecc2dd68","Type":"ContainerDied","Data":"eef0e5cc3d2e4f119b7b0218aafdda728d597f7d8f94d8e48485eaa9c480499b"} Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.363277 4805 scope.go:117] "RemoveContainer" containerID="237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.363276 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z95bm" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.395329 4805 scope.go:117] "RemoveContainer" containerID="237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77" Feb 26 17:33:24 crc kubenswrapper[4805]: E0226 17:33:24.395906 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77\": container with ID starting with 237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77 not found: ID does not exist" containerID="237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.395950 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77"} err="failed to get container status \"237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77\": rpc error: code = NotFound desc = could not find container \"237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77\": container with ID starting with 237a21c8a516e863ae5745e4a4d48590e0ec9e64f370f82eea21494d6ba88d77 not found: ID does not exist" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.401958 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-z95bm"] Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.406988 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-z95bm"] Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.808599 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-qpcbj" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.820742 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="metallb-system/frr-k8s-gb6n5" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.924039 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-4jj6l" Feb 26 17:33:24 crc kubenswrapper[4805]: I0226 17:33:24.960376 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d01e35-c134-4312-8b5b-1a84ecc2dd68" path="/var/lib/kubelet/pods/99d01e35-c134-4312-8b5b-1a84ecc2dd68/volumes" Feb 26 17:33:25 crc kubenswrapper[4805]: I0226 17:33:25.372064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gq9w7" event={"ID":"c98178ae-c45b-4d31-a1b9-afa30a2b25d2","Type":"ContainerStarted","Data":"005b61eb01dc8f8e41d2d025a631006ba1d852beaa4433ff071bf9e898353575"} Feb 26 17:33:25 crc kubenswrapper[4805]: I0226 17:33:25.387216 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gq9w7" podStartSLOduration=2.298433869 podStartE2EDuration="2.387198728s" podCreationTimestamp="2026-02-26 17:33:23 +0000 UTC" firstStartedPulling="2026-02-26 17:33:24.150216827 +0000 UTC m=+1118.711971166" lastFinishedPulling="2026-02-26 17:33:24.238981676 +0000 UTC m=+1118.800736025" observedRunningTime="2026-02-26 17:33:25.384854799 +0000 UTC m=+1119.946609158" watchObservedRunningTime="2026-02-26 17:33:25.387198728 +0000 UTC m=+1119.948953067" Feb 26 17:33:32 crc kubenswrapper[4805]: I0226 17:33:32.977989 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:33:32 crc kubenswrapper[4805]: I0226 17:33:32.978981 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:33:33 crc kubenswrapper[4805]: I0226 17:33:33.687240 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:33 crc kubenswrapper[4805]: I0226 17:33:33.687317 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:33 crc kubenswrapper[4805]: I0226 17:33:33.719091 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:34 crc kubenswrapper[4805]: I0226 17:33:34.468899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gq9w7" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.740460 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd"] Feb 26 17:33:39 crc kubenswrapper[4805]: E0226 17:33:39.742498 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d01e35-c134-4312-8b5b-1a84ecc2dd68" containerName="registry-server" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.742594 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d01e35-c134-4312-8b5b-1a84ecc2dd68" containerName="registry-server" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.742797 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d01e35-c134-4312-8b5b-1a84ecc2dd68" containerName="registry-server" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.743818 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.746130 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4rccw" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.751533 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd"] Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.845109 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-bundle\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.845557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5tg5\" (UniqueName: \"kubernetes.io/projected/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-kube-api-access-v5tg5\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.845736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-util\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 
17:33:39.947475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5tg5\" (UniqueName: \"kubernetes.io/projected/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-kube-api-access-v5tg5\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.947791 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-util\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.947964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-bundle\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.948223 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-util\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.948316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-bundle\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:39 crc kubenswrapper[4805]: I0226 17:33:39.969586 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5tg5\" (UniqueName: \"kubernetes.io/projected/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-kube-api-access-v5tg5\") pod \"e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:40 crc kubenswrapper[4805]: I0226 17:33:40.058722 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:40 crc kubenswrapper[4805]: I0226 17:33:40.460531 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd"] Feb 26 17:33:40 crc kubenswrapper[4805]: I0226 17:33:40.495674 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" event={"ID":"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3","Type":"ContainerStarted","Data":"50e3757ec9d213afdd09656b215352b13833e8e26a5e8102e3246f60fbfb6859"} Feb 26 17:33:41 crc kubenswrapper[4805]: I0226 17:33:41.502368 4805 generic.go:334] "Generic (PLEG): container finished" podID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerID="d478285b166981e0933426cb4c20badc4388a10a99c6f12af62e8444919a98cc" exitCode=0 Feb 26 17:33:41 crc kubenswrapper[4805]: I0226 17:33:41.502429 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" event={"ID":"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3","Type":"ContainerDied","Data":"d478285b166981e0933426cb4c20badc4388a10a99c6f12af62e8444919a98cc"} Feb 26 17:33:42 crc kubenswrapper[4805]: I0226 17:33:42.511432 4805 generic.go:334] "Generic (PLEG): container finished" podID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerID="a86e87962529f2a5fd1beb8056e15849bbf833c92c3e3a47cc41f74a224087c2" exitCode=0 Feb 26 17:33:42 crc kubenswrapper[4805]: I0226 17:33:42.511510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" event={"ID":"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3","Type":"ContainerDied","Data":"a86e87962529f2a5fd1beb8056e15849bbf833c92c3e3a47cc41f74a224087c2"} Feb 26 17:33:43 crc kubenswrapper[4805]: I0226 17:33:43.519748 4805 generic.go:334] "Generic (PLEG): container finished" podID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerID="e1f9336404c1b6d75664433e62dcfc2ea24cb85eaa7025e8d27aa4274e50d9c4" exitCode=0 Feb 26 17:33:43 crc kubenswrapper[4805]: I0226 17:33:43.519791 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" event={"ID":"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3","Type":"ContainerDied","Data":"e1f9336404c1b6d75664433e62dcfc2ea24cb85eaa7025e8d27aa4274e50d9c4"} Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.746538 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.807408 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5tg5\" (UniqueName: \"kubernetes.io/projected/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-kube-api-access-v5tg5\") pod \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.807570 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-bundle\") pod \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.807634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-util\") pod \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\" (UID: \"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3\") " Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.808215 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-bundle" (OuterVolumeSpecName: "bundle") pod "780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" (UID: "780ef4ce-e438-4cdf-8a02-9d7e5fda96f3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.814184 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-kube-api-access-v5tg5" (OuterVolumeSpecName: "kube-api-access-v5tg5") pod "780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" (UID: "780ef4ce-e438-4cdf-8a02-9d7e5fda96f3"). InnerVolumeSpecName "kube-api-access-v5tg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.825248 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-util" (OuterVolumeSpecName: "util") pod "780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" (UID: "780ef4ce-e438-4cdf-8a02-9d7e5fda96f3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.909462 4805 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.909494 4805 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-util\") on node \"crc\" DevicePath \"\"" Feb 26 17:33:44 crc kubenswrapper[4805]: I0226 17:33:44.909504 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5tg5\" (UniqueName: \"kubernetes.io/projected/780ef4ce-e438-4cdf-8a02-9d7e5fda96f3-kube-api-access-v5tg5\") on node \"crc\" DevicePath \"\"" Feb 26 17:33:45 crc kubenswrapper[4805]: I0226 17:33:45.535813 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" event={"ID":"780ef4ce-e438-4cdf-8a02-9d7e5fda96f3","Type":"ContainerDied","Data":"50e3757ec9d213afdd09656b215352b13833e8e26a5e8102e3246f60fbfb6859"} Feb 26 17:33:45 crc kubenswrapper[4805]: I0226 17:33:45.536131 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e3757ec9d213afdd09656b215352b13833e8e26a5e8102e3246f60fbfb6859" Feb 26 17:33:45 crc kubenswrapper[4805]: I0226 17:33:45.535897 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.382937 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb"] Feb 26 17:33:52 crc kubenswrapper[4805]: E0226 17:33:52.383982 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="util" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.384000 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="util" Feb 26 17:33:52 crc kubenswrapper[4805]: E0226 17:33:52.384050 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="pull" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.384060 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="pull" Feb 26 17:33:52 crc kubenswrapper[4805]: E0226 17:33:52.384087 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="extract" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.384097 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="extract" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.384269 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="780ef4ce-e438-4cdf-8a02-9d7e5fda96f3" containerName="extract" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.384955 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.387267 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cqhnv" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.407071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lk4w\" (UniqueName: \"kubernetes.io/projected/2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5-kube-api-access-2lk4w\") pod \"openstack-operator-controller-init-558769b65d-mlxxb\" (UID: \"2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5\") " pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.409469 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb"] Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.508365 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lk4w\" (UniqueName: \"kubernetes.io/projected/2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5-kube-api-access-2lk4w\") pod \"openstack-operator-controller-init-558769b65d-mlxxb\" (UID: \"2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5\") " pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.527920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lk4w\" (UniqueName: \"kubernetes.io/projected/2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5-kube-api-access-2lk4w\") pod \"openstack-operator-controller-init-558769b65d-mlxxb\" (UID: \"2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5\") " pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.704365 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:33:52 crc kubenswrapper[4805]: I0226 17:33:52.944229 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb"] Feb 26 17:33:53 crc kubenswrapper[4805]: I0226 17:33:53.603187 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" event={"ID":"2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5","Type":"ContainerStarted","Data":"00fa99f799fbbab4f9ea1658954bb624a4fb48d18efc58047263527869f8f64a"} Feb 26 17:33:57 crc kubenswrapper[4805]: I0226 17:33:57.649776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" event={"ID":"2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5","Type":"ContainerStarted","Data":"f26e4b2b074f86083c5c3ce05a9f96163bcf9923310b18a673c8b9990d09affe"} Feb 26 17:33:57 crc kubenswrapper[4805]: I0226 17:33:57.650084 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:33:57 crc kubenswrapper[4805]: I0226 17:33:57.688249 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" podStartSLOduration=1.288401984 podStartE2EDuration="5.688228974s" podCreationTimestamp="2026-02-26 17:33:52 +0000 UTC" firstStartedPulling="2026-02-26 17:33:52.948013749 +0000 UTC m=+1147.509768088" lastFinishedPulling="2026-02-26 17:33:57.347840729 +0000 UTC m=+1151.909595078" observedRunningTime="2026-02-26 17:33:57.681913244 +0000 UTC m=+1152.243667603" watchObservedRunningTime="2026-02-26 17:33:57.688228974 +0000 UTC m=+1152.249983313" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.149279 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29535454-zzksv"] Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.150564 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.157315 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.157735 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.157962 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.162330 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535454-zzksv"] Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.253371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4pxg\" (UniqueName: \"kubernetes.io/projected/8ea99294-1de4-49ab-8e64-ae73b59d2b0d-kube-api-access-q4pxg\") pod \"auto-csr-approver-29535454-zzksv\" (UID: \"8ea99294-1de4-49ab-8e64-ae73b59d2b0d\") " pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.354281 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4pxg\" (UniqueName: \"kubernetes.io/projected/8ea99294-1de4-49ab-8e64-ae73b59d2b0d-kube-api-access-q4pxg\") pod \"auto-csr-approver-29535454-zzksv\" (UID: \"8ea99294-1de4-49ab-8e64-ae73b59d2b0d\") " pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.381941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4pxg\" (UniqueName: 
\"kubernetes.io/projected/8ea99294-1de4-49ab-8e64-ae73b59d2b0d-kube-api-access-q4pxg\") pod \"auto-csr-approver-29535454-zzksv\" (UID: \"8ea99294-1de4-49ab-8e64-ae73b59d2b0d\") " pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.473267 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:00 crc kubenswrapper[4805]: I0226 17:34:00.675234 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535454-zzksv"] Feb 26 17:34:01 crc kubenswrapper[4805]: I0226 17:34:01.681598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535454-zzksv" event={"ID":"8ea99294-1de4-49ab-8e64-ae73b59d2b0d","Type":"ContainerStarted","Data":"7682ee4b67b55435c3796656fcf9248fbc4e6fd1f38ec98c512bc023aa99f45a"} Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.692631 4805 generic.go:334] "Generic (PLEG): container finished" podID="8ea99294-1de4-49ab-8e64-ae73b59d2b0d" containerID="4f9475e21c47b527dd3f54569e139ace42bcb514cf5521d32eb78e1fd7b024fc" exitCode=0 Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.692683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535454-zzksv" event={"ID":"8ea99294-1de4-49ab-8e64-ae73b59d2b0d","Type":"ContainerDied","Data":"4f9475e21c47b527dd3f54569e139ace42bcb514cf5521d32eb78e1fd7b024fc"} Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.706956 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-558769b65d-mlxxb" Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.977784 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.978142 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.978211 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.978871 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"990e4f598fe9e20fbcba699271a0052db1db79e508164396a947ff262b514683"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:34:02 crc kubenswrapper[4805]: I0226 17:34:02.978944 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://990e4f598fe9e20fbcba699271a0052db1db79e508164396a947ff262b514683" gracePeriod=600 Feb 26 17:34:03 crc kubenswrapper[4805]: I0226 17:34:03.709250 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="990e4f598fe9e20fbcba699271a0052db1db79e508164396a947ff262b514683" exitCode=0 Feb 26 17:34:03 crc kubenswrapper[4805]: I0226 17:34:03.709725 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"990e4f598fe9e20fbcba699271a0052db1db79e508164396a947ff262b514683"} Feb 26 17:34:03 crc kubenswrapper[4805]: I0226 17:34:03.709758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"790d5b8d614e0ff22c848f43674f8b7d4c300d976397a943518fa87467a5a9a3"} Feb 26 17:34:03 crc kubenswrapper[4805]: I0226 17:34:03.709777 4805 scope.go:117] "RemoveContainer" containerID="1d44138a55ef33aa1de9eac7f541bad377db04ef7075e41168f322227c042d08" Feb 26 17:34:03 crc kubenswrapper[4805]: I0226 17:34:03.961044 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:04 crc kubenswrapper[4805]: I0226 17:34:04.107897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4pxg\" (UniqueName: \"kubernetes.io/projected/8ea99294-1de4-49ab-8e64-ae73b59d2b0d-kube-api-access-q4pxg\") pod \"8ea99294-1de4-49ab-8e64-ae73b59d2b0d\" (UID: \"8ea99294-1de4-49ab-8e64-ae73b59d2b0d\") " Feb 26 17:34:04 crc kubenswrapper[4805]: I0226 17:34:04.113160 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea99294-1de4-49ab-8e64-ae73b59d2b0d-kube-api-access-q4pxg" (OuterVolumeSpecName: "kube-api-access-q4pxg") pod "8ea99294-1de4-49ab-8e64-ae73b59d2b0d" (UID: "8ea99294-1de4-49ab-8e64-ae73b59d2b0d"). InnerVolumeSpecName "kube-api-access-q4pxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:34:04 crc kubenswrapper[4805]: I0226 17:34:04.209447 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4pxg\" (UniqueName: \"kubernetes.io/projected/8ea99294-1de4-49ab-8e64-ae73b59d2b0d-kube-api-access-q4pxg\") on node \"crc\" DevicePath \"\"" Feb 26 17:34:04 crc kubenswrapper[4805]: I0226 17:34:04.719534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535454-zzksv" event={"ID":"8ea99294-1de4-49ab-8e64-ae73b59d2b0d","Type":"ContainerDied","Data":"7682ee4b67b55435c3796656fcf9248fbc4e6fd1f38ec98c512bc023aa99f45a"} Feb 26 17:34:04 crc kubenswrapper[4805]: I0226 17:34:04.719660 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7682ee4b67b55435c3796656fcf9248fbc4e6fd1f38ec98c512bc023aa99f45a" Feb 26 17:34:04 crc kubenswrapper[4805]: I0226 17:34:04.719580 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535454-zzksv" Feb 26 17:34:05 crc kubenswrapper[4805]: I0226 17:34:05.007993 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535448-8n8lp"] Feb 26 17:34:05 crc kubenswrapper[4805]: I0226 17:34:05.012714 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535448-8n8lp"] Feb 26 17:34:06 crc kubenswrapper[4805]: I0226 17:34:06.961668 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d139485b-01a9-4993-b8a7-66dcc1008841" path="/var/lib/kubelet/pods/d139485b-01a9-4993-b8a7-66dcc1008841/volumes" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.761746 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv"] Feb 26 17:34:22 crc kubenswrapper[4805]: E0226 17:34:22.762522 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8ea99294-1de4-49ab-8e64-ae73b59d2b0d" containerName="oc" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.762535 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea99294-1de4-49ab-8e64-ae73b59d2b0d" containerName="oc" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.762681 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea99294-1de4-49ab-8e64-ae73b59d2b0d" containerName="oc" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.763132 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.764938 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-n6qf5" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.774657 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.775702 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.777407 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4jzcj" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.779228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.790456 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.791629 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.794203 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-w7ppd" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.798276 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.801171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2dg\" (UniqueName: \"kubernetes.io/projected/60a1ca6f-55b2-43e0-a86b-b38ecf7190f6-kube-api-access-tb2dg\") pod \"barbican-operator-controller-manager-868647ff47-vtqrv\" (UID: \"60a1ca6f-55b2-43e0-a86b-b38ecf7190f6\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.823516 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.824804 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.827823 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-7bpft" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.828706 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.829510 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.835517 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.836654 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2xkwf" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.849203 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.872268 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.873243 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.879755 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.882971 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rt6mj" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.891536 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.902570 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp4pk\" (UniqueName: \"kubernetes.io/projected/b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3-kube-api-access-tp4pk\") pod \"designate-operator-controller-manager-6d8bf5c495-tzsf5\" (UID: \"b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.902622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hhn\" (UniqueName: \"kubernetes.io/projected/1a0573de-bd9c-4917-93d6-4fe8ae9fde94-kube-api-access-t6hhn\") pod \"heat-operator-controller-manager-69f49c598c-69rn4\" (UID: \"1a0573de-bd9c-4917-93d6-4fe8ae9fde94\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.902651 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsnf\" (UniqueName: \"kubernetes.io/projected/e649f248-07e5-4bf2-83bf-0c7fb532dc16-kube-api-access-cxsnf\") pod 
\"glance-operator-controller-manager-784b5bb6c5-qcp9p\" (UID: \"e649f248-07e5-4bf2-83bf-0c7fb532dc16\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.902784 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2dg\" (UniqueName: \"kubernetes.io/projected/60a1ca6f-55b2-43e0-a86b-b38ecf7190f6-kube-api-access-tb2dg\") pod \"barbican-operator-controller-manager-868647ff47-vtqrv\" (UID: \"60a1ca6f-55b2-43e0-a86b-b38ecf7190f6\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.902820 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpv2m\" (UniqueName: \"kubernetes.io/projected/d7d01c08-16c4-411f-82f4-b7747d6222f7-kube-api-access-gpv2m\") pod \"cinder-operator-controller-manager-55d77d7b5c-lss24\" (UID: \"d7d01c08-16c4-411f-82f4-b7747d6222f7\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.910492 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.911415 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.914574 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.914722 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-fkd7r" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.942662 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.943976 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.950556 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-n7q9w" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.957869 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2dg\" (UniqueName: \"kubernetes.io/projected/60a1ca6f-55b2-43e0-a86b-b38ecf7190f6-kube-api-access-tb2dg\") pod \"barbican-operator-controller-manager-868647ff47-vtqrv\" (UID: \"60a1ca6f-55b2-43e0-a86b-b38ecf7190f6\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.974970 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs"] Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.990345 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt"] Feb 26 17:34:22 crc kubenswrapper[4805]: 
I0226 17:34:22.992393 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:34:22 crc kubenswrapper[4805]: I0226 17:34:22.998987 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xxw79" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.002704 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp4pk\" (UniqueName: \"kubernetes.io/projected/b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3-kube-api-access-tp4pk\") pod \"designate-operator-controller-manager-6d8bf5c495-tzsf5\" (UID: \"b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005173 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005194 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6hhn\" (UniqueName: \"kubernetes.io/projected/1a0573de-bd9c-4917-93d6-4fe8ae9fde94-kube-api-access-t6hhn\") pod \"heat-operator-controller-manager-69f49c598c-69rn4\" (UID: \"1a0573de-bd9c-4917-93d6-4fe8ae9fde94\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 
17:34:23.005213 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxsnf\" (UniqueName: \"kubernetes.io/projected/e649f248-07e5-4bf2-83bf-0c7fb532dc16-kube-api-access-cxsnf\") pod \"glance-operator-controller-manager-784b5bb6c5-qcp9p\" (UID: \"e649f248-07e5-4bf2-83bf-0c7fb532dc16\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005280 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxmtj\" (UniqueName: \"kubernetes.io/projected/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-kube-api-access-gxmtj\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005302 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n6cr\" (UniqueName: \"kubernetes.io/projected/2cf12233-5655-4a94-8f0a-bdd68756de74-kube-api-access-4n6cr\") pod \"horizon-operator-controller-manager-5b9b8895d5-24fxg\" (UID: \"2cf12233-5655-4a94-8f0a-bdd68756de74\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpv2m\" (UniqueName: \"kubernetes.io/projected/d7d01c08-16c4-411f-82f4-b7747d6222f7-kube-api-access-gpv2m\") pod \"cinder-operator-controller-manager-55d77d7b5c-lss24\" (UID: \"d7d01c08-16c4-411f-82f4-b7747d6222f7\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.005354 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-f99g6\" (UniqueName: \"kubernetes.io/projected/23a0badb-7b1f-4d18-8622-3248adbfe0ea-kube-api-access-f99g6\") pod \"ironic-operator-controller-manager-554564d7fc-8xw2w\" (UID: \"23a0badb-7b1f-4d18-8622-3248adbfe0ea\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.031107 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.032249 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.036941 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.042371 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.044927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6hhn\" (UniqueName: \"kubernetes.io/projected/1a0573de-bd9c-4917-93d6-4fe8ae9fde94-kube-api-access-t6hhn\") pod \"heat-operator-controller-manager-69f49c598c-69rn4\" (UID: \"1a0573de-bd9c-4917-93d6-4fe8ae9fde94\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.047522 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxsnf\" (UniqueName: \"kubernetes.io/projected/e649f248-07e5-4bf2-83bf-0c7fb532dc16-kube-api-access-cxsnf\") pod \"glance-operator-controller-manager-784b5bb6c5-qcp9p\" (UID: \"e649f248-07e5-4bf2-83bf-0c7fb532dc16\") " 
pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.047676 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpv2m\" (UniqueName: \"kubernetes.io/projected/d7d01c08-16c4-411f-82f4-b7747d6222f7-kube-api-access-gpv2m\") pod \"cinder-operator-controller-manager-55d77d7b5c-lss24\" (UID: \"d7d01c08-16c4-411f-82f4-b7747d6222f7\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.047745 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.051356 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-f92gl" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.055947 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.059316 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-mprfl" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.069105 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.079444 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp4pk\" (UniqueName: \"kubernetes.io/projected/b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3-kube-api-access-tp4pk\") pod \"designate-operator-controller-manager-6d8bf5c495-tzsf5\" (UID: \"b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.083076 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.084100 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.087955 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.089486 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-n8kht" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.109663 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.110672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx7tp\" (UniqueName: \"kubernetes.io/projected/171cc925-66fe-4ceb-b9b5-56b48a121642-kube-api-access-dx7tp\") pod \"mariadb-operator-controller-manager-6994f66f48-5x6cj\" (UID: \"171cc925-66fe-4ceb-b9b5-56b48a121642\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.110739 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxmtj\" (UniqueName: \"kubernetes.io/projected/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-kube-api-access-gxmtj\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.110770 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gblf\" (UniqueName: \"kubernetes.io/projected/e4c0ad7f-8daf-4599-a457-135483730ac6-kube-api-access-5gblf\") pod \"manila-operator-controller-manager-67d996989d-7lmx7\" (UID: \"e4c0ad7f-8daf-4599-a457-135483730ac6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.110798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n6cr\" (UniqueName: \"kubernetes.io/projected/2cf12233-5655-4a94-8f0a-bdd68756de74-kube-api-access-4n6cr\") pod \"horizon-operator-controller-manager-5b9b8895d5-24fxg\" (UID: \"2cf12233-5655-4a94-8f0a-bdd68756de74\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:34:23 crc 
kubenswrapper[4805]: I0226 17:34:23.110853 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f99g6\" (UniqueName: \"kubernetes.io/projected/23a0badb-7b1f-4d18-8622-3248adbfe0ea-kube-api-access-f99g6\") pod \"ironic-operator-controller-manager-554564d7fc-8xw2w\" (UID: \"23a0badb-7b1f-4d18-8622-3248adbfe0ea\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.110931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.110966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzl8l\" (UniqueName: \"kubernetes.io/projected/a824c389-facb-49e7-91e9-a05c95cdd2b9-kube-api-access-tzl8l\") pod \"keystone-operator-controller-manager-b4d948c87-wxtbt\" (UID: \"a824c389-facb-49e7-91e9-a05c95cdd2b9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.111679 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.111758 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert podName:8f2e984c-ff5c-419b-bcd2-3e0d53825b89 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:23.611731144 +0000 UTC m=+1178.173485483 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert") pod "infra-operator-controller-manager-79d975b745-6xzvs" (UID: "8f2e984c-ff5c-419b-bcd2-3e0d53825b89") : secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.126117 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.154531 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.161663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxmtj\" (UniqueName: \"kubernetes.io/projected/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-kube-api-access-gxmtj\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.166596 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n6cr\" (UniqueName: \"kubernetes.io/projected/2cf12233-5655-4a94-8f0a-bdd68756de74-kube-api-access-4n6cr\") pod \"horizon-operator-controller-manager-5b9b8895d5-24fxg\" (UID: \"2cf12233-5655-4a94-8f0a-bdd68756de74\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.171142 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f99g6\" (UniqueName: \"kubernetes.io/projected/23a0badb-7b1f-4d18-8622-3248adbfe0ea-kube-api-access-f99g6\") pod \"ironic-operator-controller-manager-554564d7fc-8xw2w\" (UID: \"23a0badb-7b1f-4d18-8622-3248adbfe0ea\") " 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.208528 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.209402 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.209482 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.208041 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.211987 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-nw7j7" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.212443 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.212804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzl8l\" (UniqueName: \"kubernetes.io/projected/a824c389-facb-49e7-91e9-a05c95cdd2b9-kube-api-access-tzl8l\") pod \"keystone-operator-controller-manager-b4d948c87-wxtbt\" (UID: \"a824c389-facb-49e7-91e9-a05c95cdd2b9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.212848 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx7tp\" (UniqueName: \"kubernetes.io/projected/171cc925-66fe-4ceb-b9b5-56b48a121642-kube-api-access-dx7tp\") pod \"mariadb-operator-controller-manager-6994f66f48-5x6cj\" (UID: \"171cc925-66fe-4ceb-b9b5-56b48a121642\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.213147 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gblf\" (UniqueName: \"kubernetes.io/projected/e4c0ad7f-8daf-4599-a457-135483730ac6-kube-api-access-5gblf\") pod \"manila-operator-controller-manager-67d996989d-7lmx7\" (UID: \"e4c0ad7f-8daf-4599-a457-135483730ac6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.213230 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8zd\" (UniqueName: \"kubernetes.io/projected/4a5c2658-ad3f-49b7-bb08-64aa33210ea4-kube-api-access-fz8zd\") pod \"neutron-operator-controller-manager-6bd4687957-sfq49\" (UID: \"4a5c2658-ad3f-49b7-bb08-64aa33210ea4\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:34:23 crc 
kubenswrapper[4805]: I0226 17:34:23.213266 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4lnz\" (UniqueName: \"kubernetes.io/projected/a0bc07cc-8639-49ce-824d-b1cde1a7c500-kube-api-access-b4lnz\") pod \"nova-operator-controller-manager-567668f5cf-n8jrv\" (UID: \"a0bc07cc-8639-49ce-824d-b1cde1a7c500\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.218966 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.248038 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.257703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzl8l\" (UniqueName: \"kubernetes.io/projected/a824c389-facb-49e7-91e9-a05c95cdd2b9-kube-api-access-tzl8l\") pod \"keystone-operator-controller-manager-b4d948c87-wxtbt\" (UID: \"a824c389-facb-49e7-91e9-a05c95cdd2b9\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.258463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx7tp\" (UniqueName: \"kubernetes.io/projected/171cc925-66fe-4ceb-b9b5-56b48a121642-kube-api-access-dx7tp\") pod \"mariadb-operator-controller-manager-6994f66f48-5x6cj\" (UID: \"171cc925-66fe-4ceb-b9b5-56b48a121642\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.258541 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gblf\" (UniqueName: 
\"kubernetes.io/projected/e4c0ad7f-8daf-4599-a457-135483730ac6-kube-api-access-5gblf\") pod \"manila-operator-controller-manager-67d996989d-7lmx7\" (UID: \"e4c0ad7f-8daf-4599-a457-135483730ac6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.261791 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.261885 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.264374 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cl6lv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.269684 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.270972 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.274329 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ghhzx" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.278597 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.281522 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.285093 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.293472 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.293677 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zsgbw" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.305868 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.311009 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.315415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4lnz\" (UniqueName: \"kubernetes.io/projected/a0bc07cc-8639-49ce-824d-b1cde1a7c500-kube-api-access-b4lnz\") pod \"nova-operator-controller-manager-567668f5cf-n8jrv\" (UID: \"a0bc07cc-8639-49ce-824d-b1cde1a7c500\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.315544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hstsd\" (UniqueName: \"kubernetes.io/projected/a566e6f7-f550-4c90-a3fe-f5b66061d126-kube-api-access-hstsd\") pod \"ovn-operator-controller-manager-5955d8c787-qk4bw\" (UID: \"a566e6f7-f550-4c90-a3fe-f5b66061d126\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.315612 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.315667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbh9n\" (UniqueName: \"kubernetes.io/projected/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-kube-api-access-fbh9n\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" 
Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.315722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr45f\" (UniqueName: \"kubernetes.io/projected/10b3e5a9-53df-421d-a6dc-ccb44f03f432-kube-api-access-zr45f\") pod \"octavia-operator-controller-manager-659dc6bbfc-pc48x\" (UID: \"10b3e5a9-53df-421d-a6dc-ccb44f03f432\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.315748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz8zd\" (UniqueName: \"kubernetes.io/projected/4a5c2658-ad3f-49b7-bb08-64aa33210ea4-kube-api-access-fz8zd\") pod \"neutron-operator-controller-manager-6bd4687957-sfq49\" (UID: \"4a5c2658-ad3f-49b7-bb08-64aa33210ea4\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.342789 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.343004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8zd\" (UniqueName: \"kubernetes.io/projected/4a5c2658-ad3f-49b7-bb08-64aa33210ea4-kube-api-access-fz8zd\") pod \"neutron-operator-controller-manager-6bd4687957-sfq49\" (UID: \"4a5c2658-ad3f-49b7-bb08-64aa33210ea4\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.346983 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4lnz\" (UniqueName: \"kubernetes.io/projected/a0bc07cc-8639-49ce-824d-b1cde1a7c500-kube-api-access-b4lnz\") pod \"nova-operator-controller-manager-567668f5cf-n8jrv\" (UID: \"a0bc07cc-8639-49ce-824d-b1cde1a7c500\") " 
pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.368363 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.371173 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.376124 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jwq48" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.376278 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.413972 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.415161 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.418131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr45f\" (UniqueName: \"kubernetes.io/projected/10b3e5a9-53df-421d-a6dc-ccb44f03f432-kube-api-access-zr45f\") pod \"octavia-operator-controller-manager-659dc6bbfc-pc48x\" (UID: \"10b3e5a9-53df-421d-a6dc-ccb44f03f432\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.418197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xj55\" (UniqueName: \"kubernetes.io/projected/e234f401-20b1-4f4b-b884-0ccae8a82887-kube-api-access-9xj55\") pod \"placement-operator-controller-manager-8497b45c89-s49nh\" (UID: \"e234f401-20b1-4f4b-b884-0ccae8a82887\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.418235 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtqf2\" (UniqueName: \"kubernetes.io/projected/2e440888-07ce-4a09-ac04-ab52fae67596-kube-api-access-rtqf2\") pod \"swift-operator-controller-manager-68f46476f-rdgbm\" (UID: \"2e440888-07ce-4a09-ac04-ab52fae67596\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.418357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hstsd\" (UniqueName: \"kubernetes.io/projected/a566e6f7-f550-4c90-a3fe-f5b66061d126-kube-api-access-hstsd\") pod \"ovn-operator-controller-manager-5955d8c787-qk4bw\" (UID: \"a566e6f7-f550-4c90-a3fe-f5b66061d126\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:34:23 crc 
kubenswrapper[4805]: I0226 17:34:23.418427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.418493 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbh9n\" (UniqueName: \"kubernetes.io/projected/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-kube-api-access-fbh9n\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.418989 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.419066 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert podName:ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:23.919045429 +0000 UTC m=+1178.480799758 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" (UID: "ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.424115 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.424617 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lm9pt" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.432002 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.433784 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.435567 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hzghv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.461304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbh9n\" (UniqueName: \"kubernetes.io/projected/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-kube-api-access-fbh9n\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.461727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hstsd\" (UniqueName: \"kubernetes.io/projected/a566e6f7-f550-4c90-a3fe-f5b66061d126-kube-api-access-hstsd\") pod \"ovn-operator-controller-manager-5955d8c787-qk4bw\" (UID: \"a566e6f7-f550-4c90-a3fe-f5b66061d126\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.463930 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.464136 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr45f\" (UniqueName: \"kubernetes.io/projected/10b3e5a9-53df-421d-a6dc-ccb44f03f432-kube-api-access-zr45f\") pod \"octavia-operator-controller-manager-659dc6bbfc-pc48x\" (UID: \"10b3e5a9-53df-421d-a6dc-ccb44f03f432\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.468982 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.524687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xj55\" (UniqueName: \"kubernetes.io/projected/e234f401-20b1-4f4b-b884-0ccae8a82887-kube-api-access-9xj55\") pod \"placement-operator-controller-manager-8497b45c89-s49nh\" (UID: \"e234f401-20b1-4f4b-b884-0ccae8a82887\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.524734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtqf2\" (UniqueName: \"kubernetes.io/projected/2e440888-07ce-4a09-ac04-ab52fae67596-kube-api-access-rtqf2\") pod \"swift-operator-controller-manager-68f46476f-rdgbm\" (UID: \"2e440888-07ce-4a09-ac04-ab52fae67596\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.550080 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.551645 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.554441 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.556009 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5wsv5" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.556322 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtqf2\" (UniqueName: \"kubernetes.io/projected/2e440888-07ce-4a09-ac04-ab52fae67596-kube-api-access-rtqf2\") pod \"swift-operator-controller-manager-68f46476f-rdgbm\" (UID: \"2e440888-07ce-4a09-ac04-ab52fae67596\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.559366 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.570930 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.578943 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.584166 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xj55\" (UniqueName: \"kubernetes.io/projected/e234f401-20b1-4f4b-b884-0ccae8a82887-kube-api-access-9xj55\") pod \"placement-operator-controller-manager-8497b45c89-s49nh\" (UID: \"e234f401-20b1-4f4b-b884-0ccae8a82887\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.599433 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.625937 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lkd2\" (UniqueName: \"kubernetes.io/projected/dfb3f113-7bae-465f-aac0-2697ba32d846-kube-api-access-9lkd2\") pod \"telemetry-operator-controller-manager-554d785765-j74p9\" (UID: \"dfb3f113-7bae-465f-aac0-2697ba32d846\") " pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.626039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.626205 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.626256 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert podName:8f2e984c-ff5c-419b-bcd2-3e0d53825b89 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:24.626240627 +0000 UTC m=+1179.187994956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert") pod "infra-operator-controller-manager-79d975b745-6xzvs" (UID: "8f2e984c-ff5c-419b-bcd2-3e0d53825b89") : secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.627436 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.628470 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.632763 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-n2gvc" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.641906 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.642360 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.719242 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.727705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gn4\" (UniqueName: \"kubernetes.io/projected/8afb1a24-2085-497a-aea6-c5e35d58d2c2-kube-api-access-26gn4\") pod \"test-operator-controller-manager-5dc6794d5b-4czft\" (UID: \"8afb1a24-2085-497a-aea6-c5e35d58d2c2\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.727921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9v7r\" (UniqueName: \"kubernetes.io/projected/12c25d57-ec5f-48f9-83c6-9f099d56c313-kube-api-access-f9v7r\") pod \"watcher-operator-controller-manager-bccc79885-29g7k\" (UID: \"12c25d57-ec5f-48f9-83c6-9f099d56c313\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.728003 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lkd2\" (UniqueName: \"kubernetes.io/projected/dfb3f113-7bae-465f-aac0-2697ba32d846-kube-api-access-9lkd2\") pod \"telemetry-operator-controller-manager-554d785765-j74p9\" (UID: \"dfb3f113-7bae-465f-aac0-2697ba32d846\") " pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.733681 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.735047 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.746597 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qlddv" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.746823 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.746867 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.757998 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.775240 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lkd2\" (UniqueName: \"kubernetes.io/projected/dfb3f113-7bae-465f-aac0-2697ba32d846-kube-api-access-9lkd2\") pod \"telemetry-operator-controller-manager-554d785765-j74p9\" (UID: \"dfb3f113-7bae-465f-aac0-2697ba32d846\") " pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.788869 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.796744 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.829298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26gn4\" (UniqueName: \"kubernetes.io/projected/8afb1a24-2085-497a-aea6-c5e35d58d2c2-kube-api-access-26gn4\") pod \"test-operator-controller-manager-5dc6794d5b-4czft\" (UID: \"8afb1a24-2085-497a-aea6-c5e35d58d2c2\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.829380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9v7r\" (UniqueName: \"kubernetes.io/projected/12c25d57-ec5f-48f9-83c6-9f099d56c313-kube-api-access-f9v7r\") pod \"watcher-operator-controller-manager-bccc79885-29g7k\" (UID: \"12c25d57-ec5f-48f9-83c6-9f099d56c313\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.846822 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.848783 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.849109 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.851950 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7mls9" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.855110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9v7r\" (UniqueName: \"kubernetes.io/projected/12c25d57-ec5f-48f9-83c6-9f099d56c313-kube-api-access-f9v7r\") pod \"watcher-operator-controller-manager-bccc79885-29g7k\" (UID: \"12c25d57-ec5f-48f9-83c6-9f099d56c313\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.864641 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26gn4\" (UniqueName: \"kubernetes.io/projected/8afb1a24-2085-497a-aea6-c5e35d58d2c2-kube-api-access-26gn4\") pod \"test-operator-controller-manager-5dc6794d5b-4czft\" (UID: \"8afb1a24-2085-497a-aea6-c5e35d58d2c2\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.867570 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.875540 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.884897 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.897215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" event={"ID":"60a1ca6f-55b2-43e0-a86b-b38ecf7190f6","Type":"ContainerStarted","Data":"528f2625efe9b687c0827cbce410144a36f7b140cebfe32b6089c08bdb3f2f1c"} Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.899958 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" event={"ID":"d7d01c08-16c4-411f-82f4-b7747d6222f7","Type":"ContainerStarted","Data":"97f8516612f3787d9d4068110ecff427add8cda3a50afc7d747902114439f174"} Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.900635 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.904089 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4"] Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.933047 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.933144 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.933358 4805 secret.go:188] Couldn't get 
secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: E0226 17:34:23.933615 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert podName:ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:24.933596242 +0000 UTC m=+1179.495350581 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" (UID: "ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.933704 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf8t8\" (UniqueName: \"kubernetes.io/projected/32486358-f7d6-45da-bade-d5af6fc319fd-kube-api-access-bf8t8\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.933774 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.966339 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:34:23 crc kubenswrapper[4805]: I0226 17:34:23.985449 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.035067 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhmfl\" (UniqueName: \"kubernetes.io/projected/7bdb778d-d9d7-4d46-ba96-208bc22804e9-kube-api-access-zhmfl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gnkt\" (UID: \"7bdb778d-d9d7-4d46-ba96-208bc22804e9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.035127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf8t8\" (UniqueName: \"kubernetes.io/projected/32486358-f7d6-45da-bade-d5af6fc319fd-kube-api-access-bf8t8\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.035156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.035176 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod 
\"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.035387 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.035445 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:24.535427331 +0000 UTC m=+1179.097181670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "metrics-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.035493 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.035517 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:24.535508163 +0000 UTC m=+1179.097262502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.056788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf8t8\" (UniqueName: \"kubernetes.io/projected/32486358-f7d6-45da-bade-d5af6fc319fd-kube-api-access-bf8t8\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.116119 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.138598 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhmfl\" (UniqueName: \"kubernetes.io/projected/7bdb778d-d9d7-4d46-ba96-208bc22804e9-kube-api-access-zhmfl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gnkt\" (UID: \"7bdb778d-d9d7-4d46-ba96-208bc22804e9\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.159503 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.177829 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhmfl\" (UniqueName: \"kubernetes.io/projected/7bdb778d-d9d7-4d46-ba96-208bc22804e9-kube-api-access-zhmfl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7gnkt\" (UID: \"7bdb778d-d9d7-4d46-ba96-208bc22804e9\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.185084 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" Feb 26 17:34:24 crc kubenswrapper[4805]: W0226 17:34:24.216162 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf12233_5655_4a94_8f0a_bdd68756de74.slice/crio-8b60dab6cbef1f85d0af149d1232b6bf94249c7ec21ecee045cd4b51dcf958dd WatchSource:0}: Error finding container 8b60dab6cbef1f85d0af149d1232b6bf94249c7ec21ecee045cd4b51dcf958dd: Status 404 returned error can't find the container with id 8b60dab6cbef1f85d0af149d1232b6bf94249c7ec21ecee045cd4b51dcf958dd Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.226799 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.255191 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.276837 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt"] Feb 26 17:34:24 crc kubenswrapper[4805]: W0226 17:34:24.287573 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda824c389_facb_49e7_91e9_a05c95cdd2b9.slice/crio-af102a4613af7f7aab73dff2d24a8dba0b8aa501efe11aebcb81c6779ef834be WatchSource:0}: Error finding container af102a4613af7f7aab73dff2d24a8dba0b8aa501efe11aebcb81c6779ef834be: Status 404 returned error can't find the container with id af102a4613af7f7aab73dff2d24a8dba0b8aa501efe11aebcb81c6779ef834be Feb 26 17:34:24 crc 
kubenswrapper[4805]: I0226 17:34:24.405044 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.505914 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.517922 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.532984 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49"] Feb 26 17:34:24 crc kubenswrapper[4805]: W0226 17:34:24.539628 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0bc07cc_8639_49ce_824d_b1cde1a7c500.slice/crio-47f91c8c16bbc71ecabd2e2f8fd5ce28717e69a267a080075febaad1458de933 WatchSource:0}: Error finding container 47f91c8c16bbc71ecabd2e2f8fd5ce28717e69a267a080075febaad1458de933: Status 404 returned error can't find the container with id 47f91c8c16bbc71ecabd2e2f8fd5ce28717e69a267a080075febaad1458de933 Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.541845 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.541964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.542158 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.542208 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:25.542194327 +0000 UTC m=+1180.103948666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "metrics-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.542457 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.542494 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:25.542485434 +0000 UTC m=+1180.104239773 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.648893 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.649051 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.649120 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert podName:8f2e984c-ff5c-419b-bcd2-3e0d53825b89 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:26.649101535 +0000 UTC m=+1181.210855874 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert") pod "infra-operator-controller-manager-79d975b745-6xzvs" (UID: "8f2e984c-ff5c-419b-bcd2-3e0d53825b89") : secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.668242 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.678288 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9"] Feb 26 17:34:24 crc kubenswrapper[4805]: W0226 17:34:24.685155 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda566e6f7_f550_4c90_a3fe_f5b66061d126.slice/crio-7fd037adad8d1d6de5641617e44223ccb5cbe5f9b1cc70881388219052fb7dca WatchSource:0}: Error finding container 7fd037adad8d1d6de5641617e44223ccb5cbe5f9b1cc70881388219052fb7dca: Status 404 returned error can't find the container with id 7fd037adad8d1d6de5641617e44223ccb5cbe5f9b1cc70881388219052fb7dca Feb 26 17:34:24 crc kubenswrapper[4805]: W0226 17:34:24.696162 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfb3f113_7bae_465f_aac0_2697ba32d846.slice/crio-7abd4b9fd69bf358ae58b0f1fc4a8fcae7a3b2582ccc81a98dc9cb96d6f2b3d2 WatchSource:0}: Error finding container 7abd4b9fd69bf358ae58b0f1fc4a8fcae7a3b2582ccc81a98dc9cb96d6f2b3d2: Status 404 returned error can't find the container with id 7abd4b9fd69bf358ae58b0f1fc4a8fcae7a3b2582ccc81a98dc9cb96d6f2b3d2 Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.703318 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9lkd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-554d785765-j74p9_openstack-operators(dfb3f113-7bae-465f-aac0-2697ba32d846): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.704478 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" podUID="dfb3f113-7bae-465f-aac0-2697ba32d846" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.769579 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k"] Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.789229 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9v7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-29g7k_openstack-operators(12c25d57-ec5f-48f9-83c6-9f099d56c313): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.790857 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" podUID="12c25d57-ec5f-48f9-83c6-9f099d56c313" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.790980 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xj55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-s49nh_openstack-operators(e234f401-20b1-4f4b-b884-0ccae8a82887): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.792160 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" podUID="e234f401-20b1-4f4b-b884-0ccae8a82887" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.796140 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh"] Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.802712 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft"] Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.822840 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26gn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-4czft_openstack-operators(8afb1a24-2085-497a-aea6-c5e35d58d2c2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.823512 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm"] Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.824116 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" podUID="8afb1a24-2085-497a-aea6-c5e35d58d2c2" Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.922100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" event={"ID":"10b3e5a9-53df-421d-a6dc-ccb44f03f432","Type":"ContainerStarted","Data":"13ab346c808c326111e61c3b94f7024797a795f16280f964e09980e7f7b1c631"} Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.939256 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" event={"ID":"4a5c2658-ad3f-49b7-bb08-64aa33210ea4","Type":"ContainerStarted","Data":"4857e5f6983d6d5930db789b3ab9a19bbbcd49c6c33ad70a6aac3a15c14fcc63"} Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.954009 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" event={"ID":"2e440888-07ce-4a09-ac04-ab52fae67596","Type":"ContainerStarted","Data":"1e2307ce7e362916d7c0ca29bb898190c64de393eae217ccc35ea03a3202220b"} Feb 26 17:34:24 crc kubenswrapper[4805]: I0226 17:34:24.967752 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.968004 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.968091 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert podName:ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:26.968072734 +0000 UTC m=+1181.529827073 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" (UID: "ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.987640 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" podUID="12c25d57-ec5f-48f9-83c6-9f099d56c313" Feb 26 17:34:24 crc kubenswrapper[4805]: E0226 17:34:24.992249 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" podUID="8afb1a24-2085-497a-aea6-c5e35d58d2c2" Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt"] Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011271 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" event={"ID":"2cf12233-5655-4a94-8f0a-bdd68756de74","Type":"ContainerStarted","Data":"8b60dab6cbef1f85d0af149d1232b6bf94249c7ec21ecee045cd4b51dcf958dd"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011292 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" 
event={"ID":"12c25d57-ec5f-48f9-83c6-9f099d56c313","Type":"ContainerStarted","Data":"b2959c6b0b8909843c997e1f55bf0ad6d5cbea876c73f5ac7240028ba2d94eeb"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011333 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" event={"ID":"8afb1a24-2085-497a-aea6-c5e35d58d2c2","Type":"ContainerStarted","Data":"f3aa2694a8636f36c5fa5aa17af03ae08a484007da086c9210521d453d0f3552"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011344 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" event={"ID":"a0bc07cc-8639-49ce-824d-b1cde1a7c500","Type":"ContainerStarted","Data":"47f91c8c16bbc71ecabd2e2f8fd5ce28717e69a267a080075febaad1458de933"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" event={"ID":"171cc925-66fe-4ceb-b9b5-56b48a121642","Type":"ContainerStarted","Data":"97986542c8a9218fc283da8953d27490130f079a87d09a3735c19a5f6088f424"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.011631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" event={"ID":"a824c389-facb-49e7-91e9-a05c95cdd2b9","Type":"ContainerStarted","Data":"af102a4613af7f7aab73dff2d24a8dba0b8aa501efe11aebcb81c6779ef834be"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.029522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" event={"ID":"dfb3f113-7bae-465f-aac0-2697ba32d846","Type":"ContainerStarted","Data":"7abd4b9fd69bf358ae58b0f1fc4a8fcae7a3b2582ccc81a98dc9cb96d6f2b3d2"} Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.032619 4805 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" podUID="dfb3f113-7bae-465f-aac0-2697ba32d846" Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.034537 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" event={"ID":"1a0573de-bd9c-4917-93d6-4fe8ae9fde94","Type":"ContainerStarted","Data":"c15c89dd3f04c0f25f81b07a52e3e13643d22edef783cb63183fd97383729e43"} Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.047418 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zhmfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7gnkt_openstack-operators(7bdb778d-d9d7-4d46-ba96-208bc22804e9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.048642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" podUID="7bdb778d-d9d7-4d46-ba96-208bc22804e9" Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.050608 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" event={"ID":"e4c0ad7f-8daf-4599-a457-135483730ac6","Type":"ContainerStarted","Data":"01bba3003650cefa1e5e71c91543a15eb6dad2de6cb4b617055cee4dc7697ace"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.063843 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" 
event={"ID":"b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3","Type":"ContainerStarted","Data":"b24e5ae1194f4f378978862052726cb750a230b31e891afc393e90f80df39fa4"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.066285 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" event={"ID":"a566e6f7-f550-4c90-a3fe-f5b66061d126","Type":"ContainerStarted","Data":"7fd037adad8d1d6de5641617e44223ccb5cbe5f9b1cc70881388219052fb7dca"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.068377 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" event={"ID":"e649f248-07e5-4bf2-83bf-0c7fb532dc16","Type":"ContainerStarted","Data":"7d657248d7fa5e1cd0b2dc301927ea7240f4a301fc11a44cbf7da0c32e4a7ba9"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.072322 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" event={"ID":"e234f401-20b1-4f4b-b884-0ccae8a82887","Type":"ContainerStarted","Data":"a2e5f977adc4e4edbee3952760adafeb9ab8601374238b921ae24943b57bfe0e"} Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.073636 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" podUID="e234f401-20b1-4f4b-b884-0ccae8a82887" Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.074220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" 
event={"ID":"23a0badb-7b1f-4d18-8622-3248adbfe0ea","Type":"ContainerStarted","Data":"988c0c5d202552b04cfaf13562bb7a7db3578aad8a0807c0d4dba26f7625ba71"} Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.580875 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:25 crc kubenswrapper[4805]: I0226 17:34:25.581111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.581024 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.581201 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:27.581180624 +0000 UTC m=+1182.142934963 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "webhook-server-cert" not found Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.581843 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 17:34:25 crc kubenswrapper[4805]: E0226 17:34:25.581885 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:27.581875431 +0000 UTC m=+1182.143629770 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "metrics-server-cert" not found Feb 26 17:34:26 crc kubenswrapper[4805]: I0226 17:34:26.100365 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" event={"ID":"7bdb778d-d9d7-4d46-ba96-208bc22804e9","Type":"ContainerStarted","Data":"07f431d674783db1a651161206db079b6ff230e5e7ada35b1d5958ed11ba9751"} Feb 26 17:34:26 crc kubenswrapper[4805]: E0226 17:34:26.101065 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" podUID="e234f401-20b1-4f4b-b884-0ccae8a82887" Feb 26 17:34:26 crc 
kubenswrapper[4805]: E0226 17:34:26.103217 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" podUID="12c25d57-ec5f-48f9-83c6-9f099d56c313" Feb 26 17:34:26 crc kubenswrapper[4805]: E0226 17:34:26.103228 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" podUID="8afb1a24-2085-497a-aea6-c5e35d58d2c2" Feb 26 17:34:26 crc kubenswrapper[4805]: E0226 17:34:26.103476 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" podUID="dfb3f113-7bae-465f-aac0-2697ba32d846" Feb 26 17:34:26 crc kubenswrapper[4805]: E0226 17:34:26.103554 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" podUID="7bdb778d-d9d7-4d46-ba96-208bc22804e9" Feb 26 17:34:26 crc kubenswrapper[4805]: I0226 17:34:26.694967 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:26 crc kubenswrapper[4805]: E0226 17:34:26.695262 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:26 crc kubenswrapper[4805]: E0226 17:34:26.695560 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert podName:8f2e984c-ff5c-419b-bcd2-3e0d53825b89 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:30.695519398 +0000 UTC m=+1185.257273737 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert") pod "infra-operator-controller-manager-79d975b745-6xzvs" (UID: "8f2e984c-ff5c-419b-bcd2-3e0d53825b89") : secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:27 crc kubenswrapper[4805]: I0226 17:34:26.999762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 17:34:26.999917 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 17:34:26.999970 4805 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert podName:ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:30.99995467 +0000 UTC m=+1185.561709009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" (UID: "ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 17:34:27.105885 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" podUID="7bdb778d-d9d7-4d46-ba96-208bc22804e9" Feb 26 17:34:27 crc kubenswrapper[4805]: I0226 17:34:27.607741 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:27 crc kubenswrapper[4805]: I0226 17:34:27.607786 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 
17:34:27.607916 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 17:34:27.607990 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:31.60797331 +0000 UTC m=+1186.169727649 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "metrics-server-cert" not found Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 17:34:27.607995 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 17:34:27 crc kubenswrapper[4805]: E0226 17:34:27.608119 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:31.608093693 +0000 UTC m=+1186.169848102 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "webhook-server-cert" not found Feb 26 17:34:30 crc kubenswrapper[4805]: I0226 17:34:30.771828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:30 crc kubenswrapper[4805]: E0226 17:34:30.772008 4805 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:30 crc kubenswrapper[4805]: E0226 17:34:30.772362 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert podName:8f2e984c-ff5c-419b-bcd2-3e0d53825b89 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:38.772341291 +0000 UTC m=+1193.334095620 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert") pod "infra-operator-controller-manager-79d975b745-6xzvs" (UID: "8f2e984c-ff5c-419b-bcd2-3e0d53825b89") : secret "infra-operator-webhook-server-cert" not found Feb 26 17:34:31 crc kubenswrapper[4805]: I0226 17:34:31.077621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:31 crc kubenswrapper[4805]: E0226 17:34:31.077817 4805 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:31 crc kubenswrapper[4805]: E0226 17:34:31.078139 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert podName:ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8 nodeName:}" failed. No retries permitted until 2026-02-26 17:34:39.078118096 +0000 UTC m=+1193.639872435 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" (UID: "ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 26 17:34:31 crc kubenswrapper[4805]: I0226 17:34:31.684647 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:31 crc kubenswrapper[4805]: I0226 17:34:31.684718 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:31 crc kubenswrapper[4805]: E0226 17:34:31.684943 4805 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 26 17:34:31 crc kubenswrapper[4805]: E0226 17:34:31.684997 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:39.684980978 +0000 UTC m=+1194.246735317 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "metrics-server-cert" not found Feb 26 17:34:31 crc kubenswrapper[4805]: E0226 17:34:31.685225 4805 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 26 17:34:31 crc kubenswrapper[4805]: E0226 17:34:31.685322 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs podName:32486358-f7d6-45da-bade-d5af6fc319fd nodeName:}" failed. No retries permitted until 2026-02-26 17:34:39.685301866 +0000 UTC m=+1194.247056245 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs") pod "openstack-operator-controller-manager-79f489f8d5-t8gwl" (UID: "32486358-f7d6-45da-bade-d5af6fc319fd") : secret "webhook-server-cert" not found Feb 26 17:34:36 crc kubenswrapper[4805]: E0226 17:34:36.700480 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 26 17:34:36 crc kubenswrapper[4805]: E0226 17:34:36.701135 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rtqf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-rdgbm_openstack-operators(2e440888-07ce-4a09-ac04-ab52fae67596): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:36 crc kubenswrapper[4805]: E0226 17:34:36.702589 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" podUID="2e440888-07ce-4a09-ac04-ab52fae67596" Feb 26 17:34:37 crc kubenswrapper[4805]: E0226 17:34:37.747724 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" podUID="2e440888-07ce-4a09-ac04-ab52fae67596" Feb 26 17:34:37 crc kubenswrapper[4805]: E0226 17:34:37.872693 4805 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 26 17:34:37 crc kubenswrapper[4805]: E0226 17:34:37.873506 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b4lnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-n8jrv_openstack-operators(a0bc07cc-8639-49ce-824d-b1cde1a7c500): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:37 crc kubenswrapper[4805]: E0226 17:34:37.874689 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" podUID="a0bc07cc-8639-49ce-824d-b1cde1a7c500" Feb 26 17:34:38 crc kubenswrapper[4805]: I0226 17:34:38.961525 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:38 crc kubenswrapper[4805]: I0226 17:34:38.980378 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/8f2e984c-ff5c-419b-bcd2-3e0d53825b89-cert\") pod \"infra-operator-controller-manager-79d975b745-6xzvs\" (UID: \"8f2e984c-ff5c-419b-bcd2-3e0d53825b89\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:38 crc kubenswrapper[4805]: E0226 17:34:38.995806 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" podUID="a0bc07cc-8639-49ce-824d-b1cde1a7c500" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.180845 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.181734 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.186668 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p\" (UID: \"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.241265 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.688549 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.688615 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.702206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-metrics-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.702256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/32486358-f7d6-45da-bade-d5af6fc319fd-webhook-certs\") pod \"openstack-operator-controller-manager-79f489f8d5-t8gwl\" (UID: \"32486358-f7d6-45da-bade-d5af6fc319fd\") " pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:39 crc kubenswrapper[4805]: I0226 17:34:39.716942 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:34:47 crc kubenswrapper[4805]: E0226 17:34:47.070954 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf" Feb 26 17:34:47 crc kubenswrapper[4805]: E0226 17:34:47.071714 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fz8zd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6bd4687957-sfq49_openstack-operators(4a5c2658-ad3f-49b7-bb08-64aa33210ea4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:47 crc kubenswrapper[4805]: E0226 17:34:47.073005 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" podUID="4a5c2658-ad3f-49b7-bb08-64aa33210ea4" Feb 26 17:34:48 crc kubenswrapper[4805]: E0226 17:34:48.055401 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:14ae1fb8d065e2317959ce7490a878dc87731d27ebf40259f801ba1a83cfefcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" podUID="4a5c2658-ad3f-49b7-bb08-64aa33210ea4" Feb 26 17:34:48 crc kubenswrapper[4805]: E0226 17:34:48.938315 4805 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192" Feb 26 17:34:48 crc kubenswrapper[4805]: E0226 17:34:48.938496 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hstsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-qk4bw_openstack-operators(a566e6f7-f550-4c90-a3fe-f5b66061d126): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:48 crc kubenswrapper[4805]: E0226 17:34:48.939672 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" podUID="a566e6f7-f550-4c90-a3fe-f5b66061d126" Feb 26 17:34:49 crc kubenswrapper[4805]: E0226 17:34:49.061741 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" podUID="a566e6f7-f550-4c90-a3fe-f5b66061d126" Feb 26 17:34:49 crc kubenswrapper[4805]: I0226 17:34:49.351703 4805 scope.go:117] "RemoveContainer" 
containerID="bf3e363664202893aa2a5173369d952647d645a954f1e3faac6b4a08d210a3b2" Feb 26 17:34:50 crc kubenswrapper[4805]: E0226 17:34:50.009252 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 26 17:34:50 crc kubenswrapper[4805]: E0226 17:34:50.009492 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb2dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-vtqrv_openstack-operators(60a1ca6f-55b2-43e0-a86b-b38ecf7190f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:50 crc kubenswrapper[4805]: E0226 17:34:50.021198 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" podUID="60a1ca6f-55b2-43e0-a86b-b38ecf7190f6" Feb 26 17:34:50 crc kubenswrapper[4805]: E0226 17:34:50.070274 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" podUID="60a1ca6f-55b2-43e0-a86b-b38ecf7190f6" Feb 26 17:34:51 crc kubenswrapper[4805]: E0226 17:34:51.603745 4805 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 26 17:34:51 crc kubenswrapper[4805]: E0226 17:34:51.604197 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tp4pk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-tzsf5_openstack-operators(b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:51 crc kubenswrapper[4805]: E0226 17:34:51.606682 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" podUID="b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3" Feb 26 17:34:52 crc kubenswrapper[4805]: E0226 17:34:52.081558 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" podUID="b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3" Feb 26 17:34:52 crc kubenswrapper[4805]: E0226 17:34:52.205278 4805 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 26 17:34:52 crc kubenswrapper[4805]: E0226 17:34:52.205428 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tzl8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-wxtbt_openstack-operators(a824c389-facb-49e7-91e9-a05c95cdd2b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:34:52 crc kubenswrapper[4805]: E0226 17:34:52.206728 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" podUID="a824c389-facb-49e7-91e9-a05c95cdd2b9" Feb 26 17:34:53 crc kubenswrapper[4805]: E0226 17:34:53.096147 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" podUID="a824c389-facb-49e7-91e9-a05c95cdd2b9" Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.286102 4805 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs"] Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.352459 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p"] Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.358128 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" event={"ID":"e234f401-20b1-4f4b-b884-0ccae8a82887","Type":"ContainerStarted","Data":"07074fb99fead5a516446b2e8740bb43740e3188233a661a415cbce3a2b6179b"} Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.358805 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.370754 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" event={"ID":"23a0badb-7b1f-4d18-8622-3248adbfe0ea","Type":"ContainerStarted","Data":"1dc686deffead85a2069ca320a630f07a01be78cbe5fe9546411b4f4898c7d74"} Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.371140 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:34:59 crc kubenswrapper[4805]: W0226 17:34:59.375897 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f2e984c_ff5c_419b_bcd2_3e0d53825b89.slice/crio-b9cbdc6f526136ec3ff305acfcca69a64e8630f3bcf27d5fd45d68d2995c3187 WatchSource:0}: Error finding container b9cbdc6f526136ec3ff305acfcca69a64e8630f3bcf27d5fd45d68d2995c3187: Status 404 returned error can't find the container with id b9cbdc6f526136ec3ff305acfcca69a64e8630f3bcf27d5fd45d68d2995c3187 Feb 26 17:34:59 crc 
kubenswrapper[4805]: I0226 17:34:59.388079 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" podStartSLOduration=2.739184875 podStartE2EDuration="36.388054982s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.790903366 +0000 UTC m=+1179.352657705" lastFinishedPulling="2026-02-26 17:34:58.439773473 +0000 UTC m=+1213.001527812" observedRunningTime="2026-02-26 17:34:59.387411016 +0000 UTC m=+1213.949165365" watchObservedRunningTime="2026-02-26 17:34:59.388054982 +0000 UTC m=+1213.949809321" Feb 26 17:34:59 crc kubenswrapper[4805]: W0226 17:34:59.402843 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3a5e9a_ddf3_47a4_8b9c_cf04573c34c8.slice/crio-23402a5aa46ecdf9261a8068de0643bd38e0cbd5f43d12e253c09b11821acdf0 WatchSource:0}: Error finding container 23402a5aa46ecdf9261a8068de0643bd38e0cbd5f43d12e253c09b11821acdf0: Status 404 returned error can't find the container with id 23402a5aa46ecdf9261a8068de0643bd38e0cbd5f43d12e253c09b11821acdf0 Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.406375 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" event={"ID":"10b3e5a9-53df-421d-a6dc-ccb44f03f432","Type":"ContainerStarted","Data":"3b2983bcfc6c3521269b92c0dfb81b69fd234abc070dd174c134a46e310f4ff6"} Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.406727 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.418044 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" podStartSLOduration=9.487508145 
podStartE2EDuration="37.418029662s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.280214421 +0000 UTC m=+1178.841968760" lastFinishedPulling="2026-02-26 17:34:52.210735938 +0000 UTC m=+1206.772490277" observedRunningTime="2026-02-26 17:34:59.408624403 +0000 UTC m=+1213.970378742" watchObservedRunningTime="2026-02-26 17:34:59.418029662 +0000 UTC m=+1213.979784001" Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.420991 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl"] Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.466350 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" event={"ID":"171cc925-66fe-4ceb-b9b5-56b48a121642","Type":"ContainerStarted","Data":"39ce749dceeef8b5b152262ae7b9ac8bce3e7ee2e4d522627aca808dafcfd463"} Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.467067 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.654606 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" podStartSLOduration=9.414855364 podStartE2EDuration="37.654585063s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.521705378 +0000 UTC m=+1179.083459717" lastFinishedPulling="2026-02-26 17:34:52.761435077 +0000 UTC m=+1207.323189416" observedRunningTime="2026-02-26 17:34:59.651404793 +0000 UTC m=+1214.213159142" watchObservedRunningTime="2026-02-26 17:34:59.654585063 +0000 UTC m=+1214.216339402" Feb 26 17:34:59 crc kubenswrapper[4805]: I0226 17:34:59.688384 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" podStartSLOduration=9.897888290000001 podStartE2EDuration="37.688356999s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.419850328 +0000 UTC m=+1178.981604667" lastFinishedPulling="2026-02-26 17:34:52.210319027 +0000 UTC m=+1206.772073376" observedRunningTime="2026-02-26 17:34:59.685888046 +0000 UTC m=+1214.247642405" watchObservedRunningTime="2026-02-26 17:34:59.688356999 +0000 UTC m=+1214.250111348" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.478752 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" event={"ID":"dfb3f113-7bae-465f-aac0-2697ba32d846","Type":"ContainerStarted","Data":"8510f414fe0158c9bee51f8e1bc8e73d69282cf227663a66760502caf7435c7d"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.481393 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.484461 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" event={"ID":"2cf12233-5655-4a94-8f0a-bdd68756de74","Type":"ContainerStarted","Data":"db5eedacd2d2b6f8775eec99f193cac8912dc367ca0962634ebaee9f5356d301"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.484821 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.495909 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" 
event={"ID":"e649f248-07e5-4bf2-83bf-0c7fb532dc16","Type":"ContainerStarted","Data":"3ecef5785d6f606f023660b524cec9f502f2d6ad6caab90d24e22af70f0539b1"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.496639 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.511348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" event={"ID":"8f2e984c-ff5c-419b-bcd2-3e0d53825b89","Type":"ContainerStarted","Data":"b9cbdc6f526136ec3ff305acfcca69a64e8630f3bcf27d5fd45d68d2995c3187"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.520209 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" event={"ID":"32486358-f7d6-45da-bade-d5af6fc319fd","Type":"ContainerStarted","Data":"71f5b5f982022918d4a9a1499cafa1e538c0e01f30bd613886b34975d9bfd822"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.521081 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.530286 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" event={"ID":"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8","Type":"ContainerStarted","Data":"23402a5aa46ecdf9261a8068de0643bd38e0cbd5f43d12e253c09b11821acdf0"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.536319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" event={"ID":"d7d01c08-16c4-411f-82f4-b7747d6222f7","Type":"ContainerStarted","Data":"b2f67dfa40442cc3a5f7304dda5c9413a8529b18fd81bc5def837fb7bdd10280"} Feb 26 17:35:00 
crc kubenswrapper[4805]: I0226 17:35:00.536858 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.537866 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" event={"ID":"12c25d57-ec5f-48f9-83c6-9f099d56c313","Type":"ContainerStarted","Data":"589837d56fe42a6e45cfe45a88d8e42aae21bf16033c9e89ba094dbfa15848a0"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.538226 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.541534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" event={"ID":"a0bc07cc-8639-49ce-824d-b1cde1a7c500","Type":"ContainerStarted","Data":"2961f654e550421d3bfaedabf8ec45f84357ce49bef9376569943e2112c8c694"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.541939 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.544177 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" event={"ID":"2e440888-07ce-4a09-ac04-ab52fae67596","Type":"ContainerStarted","Data":"842723f511ef3dca583b0d492c4659c46a240359000bb971bc11ece0b4a30bf8"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.544608 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.568418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" event={"ID":"1a0573de-bd9c-4917-93d6-4fe8ae9fde94","Type":"ContainerStarted","Data":"dc85710cf4b8f28be4dc9537c88b18353a617d66acf5fd04c6d34160d7ee66b6"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.569226 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.649610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" event={"ID":"e4c0ad7f-8daf-4599-a457-135483730ac6","Type":"ContainerStarted","Data":"0cdd6a92bc62a8b62038c729c6a0b59d25fa7520d12da24192c00b5d0ae27edf"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.650210 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.660630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" event={"ID":"7bdb778d-d9d7-4d46-ba96-208bc22804e9","Type":"ContainerStarted","Data":"224b595de4a3ec385c6e687f0da9b74a1bb527c0c7fc8fe182c890de3af5d24f"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.671586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" event={"ID":"8afb1a24-2085-497a-aea6-c5e35d58d2c2","Type":"ContainerStarted","Data":"4dd668ecfb7cba8bdf1c00ca3b92d0d0d0acbcdf2204b4d19ebdcc5f4b947490"} Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.672350 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.707074 4805 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" podStartSLOduration=9.353198273 podStartE2EDuration="38.707056452s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.07328444 +0000 UTC m=+1178.635038789" lastFinishedPulling="2026-02-26 17:34:53.427142629 +0000 UTC m=+1207.988896968" observedRunningTime="2026-02-26 17:35:00.705910693 +0000 UTC m=+1215.267665032" watchObservedRunningTime="2026-02-26 17:35:00.707056452 +0000 UTC m=+1215.268810791" Feb 26 17:35:00 crc kubenswrapper[4805]: I0226 17:35:00.710057 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" podStartSLOduration=3.680865459 podStartE2EDuration="37.710045608s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.703194315 +0000 UTC m=+1179.264948654" lastFinishedPulling="2026-02-26 17:34:58.732374464 +0000 UTC m=+1213.294128803" observedRunningTime="2026-02-26 17:35:00.643727908 +0000 UTC m=+1215.205482257" watchObservedRunningTime="2026-02-26 17:35:00.710045608 +0000 UTC m=+1215.271799947" Feb 26 17:35:01 crc kubenswrapper[4805]: I0226 17:35:01.208885 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" podStartSLOduration=10.935878062 podStartE2EDuration="39.208853392s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:23.937580223 +0000 UTC m=+1178.499334562" lastFinishedPulling="2026-02-26 17:34:52.210555533 +0000 UTC m=+1206.772309892" observedRunningTime="2026-02-26 17:35:01.208512054 +0000 UTC m=+1215.770266383" watchObservedRunningTime="2026-02-26 17:35:01.208853392 +0000 UTC m=+1215.770607731" Feb 26 17:35:01 crc kubenswrapper[4805]: I0226 17:35:01.215893 4805 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" podStartSLOduration=5.316814107 podStartE2EDuration="39.21587087s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.541048228 +0000 UTC m=+1179.102802567" lastFinishedPulling="2026-02-26 17:34:58.440104991 +0000 UTC m=+1213.001859330" observedRunningTime="2026-02-26 17:35:00.880442314 +0000 UTC m=+1215.442196653" watchObservedRunningTime="2026-02-26 17:35:01.21587087 +0000 UTC m=+1215.777625199" Feb 26 17:35:01 crc kubenswrapper[4805]: I0226 17:35:01.244578 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" podStartSLOduration=4.614530278 podStartE2EDuration="38.244562127s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.906804192 +0000 UTC m=+1179.468558521" lastFinishedPulling="2026-02-26 17:34:58.536836031 +0000 UTC m=+1213.098590370" observedRunningTime="2026-02-26 17:35:01.2435216 +0000 UTC m=+1215.805275939" watchObservedRunningTime="2026-02-26 17:35:01.244562127 +0000 UTC m=+1215.806316466" Feb 26 17:35:01 crc kubenswrapper[4805]: I0226 17:35:01.700714 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" event={"ID":"32486358-f7d6-45da-bade-d5af6fc319fd","Type":"ContainerStarted","Data":"333965d5d6fa1ca8398cde741c2b2e7e1d38b6e149764e6842e7490f82f467a1"} Feb 26 17:35:01 crc kubenswrapper[4805]: I0226 17:35:01.715747 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" podStartSLOduration=11.745018515 podStartE2EDuration="39.71573161s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.23833359 +0000 UTC m=+1178.800087919" lastFinishedPulling="2026-02-26 
17:34:52.209046685 +0000 UTC m=+1206.770801014" observedRunningTime="2026-02-26 17:35:01.708906308 +0000 UTC m=+1216.270660647" watchObservedRunningTime="2026-02-26 17:35:01.71573161 +0000 UTC m=+1216.277485949" Feb 26 17:35:01 crc kubenswrapper[4805]: I0226 17:35:01.724613 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" podStartSLOduration=5.0584115 podStartE2EDuration="38.724595715s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.789118581 +0000 UTC m=+1179.350872920" lastFinishedPulling="2026-02-26 17:34:58.455302796 +0000 UTC m=+1213.017057135" observedRunningTime="2026-02-26 17:35:01.650261543 +0000 UTC m=+1216.212015882" watchObservedRunningTime="2026-02-26 17:35:01.724595715 +0000 UTC m=+1216.286350054" Feb 26 17:35:02 crc kubenswrapper[4805]: I0226 17:35:02.021398 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" podStartSLOduration=11.658404702 podStartE2EDuration="40.021380292s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:23.847785468 +0000 UTC m=+1178.409539817" lastFinishedPulling="2026-02-26 17:34:52.210761058 +0000 UTC m=+1206.772515407" observedRunningTime="2026-02-26 17:35:01.750347037 +0000 UTC m=+1216.312101376" watchObservedRunningTime="2026-02-26 17:35:02.021380292 +0000 UTC m=+1216.583134631" Feb 26 17:35:02 crc kubenswrapper[4805]: I0226 17:35:02.366761 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" podStartSLOduration=5.748097959 podStartE2EDuration="39.36674119s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.822730632 +0000 UTC m=+1179.384484971" lastFinishedPulling="2026-02-26 17:34:58.441373863 +0000 
UTC m=+1213.003128202" observedRunningTime="2026-02-26 17:35:02.363078417 +0000 UTC m=+1216.924832756" watchObservedRunningTime="2026-02-26 17:35:02.36674119 +0000 UTC m=+1216.928495529" Feb 26 17:35:02 crc kubenswrapper[4805]: I0226 17:35:02.369811 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" podStartSLOduration=39.369798117 podStartE2EDuration="39.369798117s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:35:02.06711433 +0000 UTC m=+1216.628868669" watchObservedRunningTime="2026-02-26 17:35:02.369798117 +0000 UTC m=+1216.931552456" Feb 26 17:35:02 crc kubenswrapper[4805]: I0226 17:35:02.397383 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7gnkt" podStartSLOduration=5.497359307 podStartE2EDuration="39.397361905s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:25.047213228 +0000 UTC m=+1179.608967557" lastFinishedPulling="2026-02-26 17:34:58.947215816 +0000 UTC m=+1213.508970155" observedRunningTime="2026-02-26 17:35:02.392615555 +0000 UTC m=+1216.954369904" watchObservedRunningTime="2026-02-26 17:35:02.397361905 +0000 UTC m=+1216.959116254" Feb 26 17:35:02 crc kubenswrapper[4805]: I0226 17:35:02.979078 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" podStartSLOduration=13.051084867 podStartE2EDuration="40.979062009s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.280384026 +0000 UTC m=+1178.842138365" lastFinishedPulling="2026-02-26 17:34:52.208361168 +0000 UTC m=+1206.770115507" observedRunningTime="2026-02-26 17:35:02.609352295 
+0000 UTC m=+1217.171106634" watchObservedRunningTime="2026-02-26 17:35:02.979062009 +0000 UTC m=+1217.540816348" Feb 26 17:35:03 crc kubenswrapper[4805]: I0226 17:35:03.992691 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" event={"ID":"4a5c2658-ad3f-49b7-bb08-64aa33210ea4","Type":"ContainerStarted","Data":"80b3f1dca1d4b8ba679e8961c6c548893ecd3ec25ff4d28fbb53ea1a9e845cfe"} Feb 26 17:35:03 crc kubenswrapper[4805]: I0226 17:35:03.993739 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:35:03 crc kubenswrapper[4805]: I0226 17:35:03.997586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" event={"ID":"60a1ca6f-55b2-43e0-a86b-b38ecf7190f6","Type":"ContainerStarted","Data":"f4933c5a567c193e69decea9d0d318bde0f113ac8511fe1aee941928a4b41c2d"} Feb 26 17:35:03 crc kubenswrapper[4805]: I0226 17:35:03.997972 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:35:04 crc kubenswrapper[4805]: I0226 17:35:04.023548 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" podStartSLOduration=3.962899232 podStartE2EDuration="42.023529145s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.542616447 +0000 UTC m=+1179.104370786" lastFinishedPulling="2026-02-26 17:35:02.60324636 +0000 UTC m=+1217.165000699" observedRunningTime="2026-02-26 17:35:04.014092276 +0000 UTC m=+1218.575846635" watchObservedRunningTime="2026-02-26 17:35:04.023529145 +0000 UTC m=+1218.585283494" Feb 26 17:35:04 crc kubenswrapper[4805]: I0226 17:35:04.060979 4805 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" podStartSLOduration=3.314249952 podStartE2EDuration="42.060958843s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:23.854034336 +0000 UTC m=+1178.415788675" lastFinishedPulling="2026-02-26 17:35:02.600743227 +0000 UTC m=+1217.162497566" observedRunningTime="2026-02-26 17:35:04.059324992 +0000 UTC m=+1218.621079331" watchObservedRunningTime="2026-02-26 17:35:04.060958843 +0000 UTC m=+1218.622713182" Feb 26 17:35:05 crc kubenswrapper[4805]: I0226 17:35:05.340192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" event={"ID":"a566e6f7-f550-4c90-a3fe-f5b66061d126","Type":"ContainerStarted","Data":"3309aaacf90353902cee1cc3712a4590f3ae8193fdb575ae7e4849bb98feb3b5"} Feb 26 17:35:05 crc kubenswrapper[4805]: I0226 17:35:05.340749 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:35:05 crc kubenswrapper[4805]: I0226 17:35:05.362973 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" podStartSLOduration=3.068702552 podStartE2EDuration="42.362951521s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.690042032 +0000 UTC m=+1179.251796371" lastFinishedPulling="2026-02-26 17:35:03.984291001 +0000 UTC m=+1218.546045340" observedRunningTime="2026-02-26 17:35:05.358039676 +0000 UTC m=+1219.919794005" watchObservedRunningTime="2026-02-26 17:35:05.362951521 +0000 UTC m=+1219.924705860" Feb 26 17:35:09 crc kubenswrapper[4805]: I0226 17:35:09.726369 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79f489f8d5-t8gwl" Feb 26 
17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.091283 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-vtqrv" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.112676 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-lss24" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.157648 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-qcp9p" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.212609 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-69rn4" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.214310 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-24fxg" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.283691 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-8xw2w" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.472326 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-7lmx7" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.556501 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-5x6cj" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.562152 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-sfq49" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.583671 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-n8jrv" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.602721 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-pc48x" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.645265 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-qk4bw" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.722375 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-s49nh" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.792422 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-rdgbm" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.799702 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-554d785765-j74p9" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.903895 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-4czft" Feb 26 17:35:13 crc kubenswrapper[4805]: I0226 17:35:13.968875 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-29g7k" Feb 26 17:35:15 crc kubenswrapper[4805]: E0226 17:35:15.840108 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" Feb 26 17:35:15 crc 
kubenswrapper[4805]: E0226 17:35:15.841592 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxmtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-79d975b745-6xzvs_openstack-operators(8f2e984c-ff5c-419b-bcd2-3e0d53825b89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:35:15 crc kubenswrapper[4805]: E0226 17:35:15.842861 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" podUID="8f2e984c-ff5c-419b-bcd2-3e0d53825b89" Feb 26 17:35:16 crc kubenswrapper[4805]: E0226 17:35:16.796757 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a\\\"\"" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" podUID="8f2e984c-ff5c-419b-bcd2-3e0d53825b89" Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.802497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" event={"ID":"a824c389-facb-49e7-91e9-a05c95cdd2b9","Type":"ContainerStarted","Data":"ad451b1690ba977facd3687fa20d0c789d481cd341655e120ca4d6aa84d5d13b"} Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.803447 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.805854 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" event={"ID":"ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8","Type":"ContainerStarted","Data":"5c6f858268a301ca400ca749a813348c318c3d3fec8ccd4c151135f4c1e96ea1"} Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.806278 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.808262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" event={"ID":"b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3","Type":"ContainerStarted","Data":"a4c9ff0462db857a75aa959891c4e71b3b272e7fba30b28c579f77daea5e9ecb"} Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.808683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.829208 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" podStartSLOduration=3.528240442 podStartE2EDuration="55.829189001s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.294230506 +0000 UTC 
m=+1178.855984845" lastFinishedPulling="2026-02-26 17:35:16.595179065 +0000 UTC m=+1231.156933404" observedRunningTime="2026-02-26 17:35:17.821039805 +0000 UTC m=+1232.382794144" watchObservedRunningTime="2026-02-26 17:35:17.829189001 +0000 UTC m=+1232.390943340" Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.833929 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" podStartSLOduration=3.429645836 podStartE2EDuration="55.833910111s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:24.192277144 +0000 UTC m=+1178.754031483" lastFinishedPulling="2026-02-26 17:35:16.596541419 +0000 UTC m=+1231.158295758" observedRunningTime="2026-02-26 17:35:17.833395838 +0000 UTC m=+1232.395150177" watchObservedRunningTime="2026-02-26 17:35:17.833910111 +0000 UTC m=+1232.395664450" Feb 26 17:35:17 crc kubenswrapper[4805]: I0226 17:35:17.895867 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" podStartSLOduration=37.631776515 podStartE2EDuration="54.89584846s" podCreationTimestamp="2026-02-26 17:34:23 +0000 UTC" firstStartedPulling="2026-02-26 17:34:59.414618475 +0000 UTC m=+1213.976372824" lastFinishedPulling="2026-02-26 17:35:16.67869043 +0000 UTC m=+1231.240444769" observedRunningTime="2026-02-26 17:35:17.891816168 +0000 UTC m=+1232.453570507" watchObservedRunningTime="2026-02-26 17:35:17.89584846 +0000 UTC m=+1232.457602799" Feb 26 17:35:23 crc kubenswrapper[4805]: I0226 17:35:23.211217 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-tzsf5" Feb 26 17:35:23 crc kubenswrapper[4805]: I0226 17:35:23.314236 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-wxtbt" Feb 26 17:35:29 crc kubenswrapper[4805]: I0226 17:35:29.247128 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p" Feb 26 17:35:29 crc kubenswrapper[4805]: I0226 17:35:29.890349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" event={"ID":"8f2e984c-ff5c-419b-bcd2-3e0d53825b89","Type":"ContainerStarted","Data":"5d6266b0c4caf5015787ecba1d5945bf7d6353e5860d9b82860d12de74bbdf5b"} Feb 26 17:35:29 crc kubenswrapper[4805]: I0226 17:35:29.890919 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:35:29 crc kubenswrapper[4805]: I0226 17:35:29.905558 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" podStartSLOduration=37.900456702 podStartE2EDuration="1m7.905535585s" podCreationTimestamp="2026-02-26 17:34:22 +0000 UTC" firstStartedPulling="2026-02-26 17:34:59.37965331 +0000 UTC m=+1213.941407649" lastFinishedPulling="2026-02-26 17:35:29.384732183 +0000 UTC m=+1243.946486532" observedRunningTime="2026-02-26 17:35:29.902490888 +0000 UTC m=+1244.464245267" watchObservedRunningTime="2026-02-26 17:35:29.905535585 +0000 UTC m=+1244.467289944" Feb 26 17:35:39 crc kubenswrapper[4805]: I0226 17:35:39.186468 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-6xzvs" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.025993 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sw4n4"] Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.028081 4805 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.033133 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.033387 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mzmfn" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.033533 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.033749 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.045307 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sw4n4"] Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.063171 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45qxk\" (UniqueName: \"kubernetes.io/projected/13a896a6-ebef-4330-8da4-2a48ff648afd-kube-api-access-45qxk\") pod \"dnsmasq-dns-675f4bcbfc-sw4n4\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.063357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a896a6-ebef-4330-8da4-2a48ff648afd-config\") pod \"dnsmasq-dns-675f4bcbfc-sw4n4\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.089350 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mhqc4"] Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.092609 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.100464 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.111170 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mhqc4"] Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.167710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45qxk\" (UniqueName: \"kubernetes.io/projected/13a896a6-ebef-4330-8da4-2a48ff648afd-kube-api-access-45qxk\") pod \"dnsmasq-dns-675f4bcbfc-sw4n4\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.168202 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a896a6-ebef-4330-8da4-2a48ff648afd-config\") pod \"dnsmasq-dns-675f4bcbfc-sw4n4\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.169088 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a896a6-ebef-4330-8da4-2a48ff648afd-config\") pod \"dnsmasq-dns-675f4bcbfc-sw4n4\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.185279 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45qxk\" (UniqueName: \"kubernetes.io/projected/13a896a6-ebef-4330-8da4-2a48ff648afd-kube-api-access-45qxk\") pod \"dnsmasq-dns-675f4bcbfc-sw4n4\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.269298 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.269375 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-config\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.269447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92hrl\" (UniqueName: \"kubernetes.io/projected/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-kube-api-access-92hrl\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.352172 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.370938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92hrl\" (UniqueName: \"kubernetes.io/projected/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-kube-api-access-92hrl\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.371046 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.371100 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-config\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.371979 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-config\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.372108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 
17:35:56.396834 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92hrl\" (UniqueName: \"kubernetes.io/projected/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-kube-api-access-92hrl\") pod \"dnsmasq-dns-78dd6ddcc-mhqc4\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.422214 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:35:56 crc kubenswrapper[4805]: I0226 17:35:56.826043 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sw4n4"] Feb 26 17:35:57 crc kubenswrapper[4805]: I0226 17:35:57.083493 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mhqc4"] Feb 26 17:35:57 crc kubenswrapper[4805]: I0226 17:35:57.158451 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" event={"ID":"6bafaeb2-6da2-4950-9dc8-82708a80fb9c","Type":"ContainerStarted","Data":"034a9168cbfd899f5d4d374d1fcadf62a274fd59d75b5691f0e4a8dcc499de31"} Feb 26 17:35:57 crc kubenswrapper[4805]: I0226 17:35:57.159955 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" event={"ID":"13a896a6-ebef-4330-8da4-2a48ff648afd","Type":"ContainerStarted","Data":"4535257b02b1010f4bf353db37329f213c5c6a537bccd5efab5782b057991191"} Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.715477 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sw4n4"] Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.744446 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-k8brz"] Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.745905 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.765386 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-k8brz"] Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.925898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z68jj\" (UniqueName: \"kubernetes.io/projected/41552c16-eba4-4163-a652-8490f5dd0ef1-kube-api-access-z68jj\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.925971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:58 crc kubenswrapper[4805]: I0226 17:35:58.926116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-config\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.027570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-config\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.027618 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z68jj\" (UniqueName: 
\"kubernetes.io/projected/41552c16-eba4-4163-a652-8490f5dd0ef1-kube-api-access-z68jj\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.027650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.028876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.029415 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-config\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.061476 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z68jj\" (UniqueName: \"kubernetes.io/projected/41552c16-eba4-4163-a652-8490f5dd0ef1-kube-api-access-z68jj\") pod \"dnsmasq-dns-666b6646f7-k8brz\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") " pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.126347 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.225752 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mhqc4"] Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.261129 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-spqx5"] Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.272285 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.305687 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-spqx5"] Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.455765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77c4\" (UniqueName: \"kubernetes.io/projected/84eddd87-9e83-41ff-a0a9-f813279962cb-kube-api-access-g77c4\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.455880 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.455965 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-config\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 
17:35:59.558259 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77c4\" (UniqueName: \"kubernetes.io/projected/84eddd87-9e83-41ff-a0a9-f813279962cb-kube-api-access-g77c4\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.558860 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.559535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-config\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.560474 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-config\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.561216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.584892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77c4\" 
(UniqueName: \"kubernetes.io/projected/84eddd87-9e83-41ff-a0a9-f813279962cb-kube-api-access-g77c4\") pod \"dnsmasq-dns-57d769cc4f-spqx5\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") " pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:35:59 crc kubenswrapper[4805]: I0226 17:35:59.653557 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:35:59.987923 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:35:59.989971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:35:59.992278 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:35:59.992302 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:35:59.992450 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:35:59.992640 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.001963 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bphbl" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.002180 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.002372 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.025445 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.049091 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-k8brz"] Feb 26 17:36:00 crc kubenswrapper[4805]: W0226 17:36:00.055449 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41552c16_eba4_4163_a652_8490f5dd0ef1.slice/crio-d34b345a2b7eb4a6b12182f0c2c5e5a2eefcd54deace4b610905300f99c0accf WatchSource:0}: Error finding container d34b345a2b7eb4a6b12182f0c2c5e5a2eefcd54deace4b610905300f99c0accf: Status 404 returned error can't find the container with id d34b345a2b7eb4a6b12182f0c2c5e5a2eefcd54deace4b610905300f99c0accf Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.067669 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.067891 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.067949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 
17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068069 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsdj5\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-kube-api-access-hsdj5\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068385 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82935132-2a23-4b0c-86c5-be40089b7e0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068701 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.068982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.069012 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82935132-2a23-4b0c-86c5-be40089b7e0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.133244 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535456-ngmsm"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.134645 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535456-ngmsm" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.137687 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.138281 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.139194 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.155121 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535456-ngmsm"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.170160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.170249 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174309 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-config-data\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174490 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174677 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsdj5\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-kube-api-access-hsdj5\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82935132-2a23-4b0c-86c5-be40089b7e0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174815 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174851 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.174945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82935132-2a23-4b0c-86c5-be40089b7e0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.175040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.179665 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.183461 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " 
pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.184258 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.185252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.187427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.189053 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.189110 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94e1f2f5e6b4d98c41fa2e76b2416407adf395bf747ae59a28cbbbf46e2baffb/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.192043 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82935132-2a23-4b0c-86c5-be40089b7e0b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.235026 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-spqx5"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.237712 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" event={"ID":"41552c16-eba4-4163-a652-8490f5dd0ef1","Type":"ContainerStarted","Data":"d34b345a2b7eb4a6b12182f0c2c5e5a2eefcd54deace4b610905300f99c0accf"} Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.268557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.277110 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc8t9\" 
(UniqueName: \"kubernetes.io/projected/bbfd9013-6210-4ca0-b7d6-ce58c547779b-kube-api-access-wc8t9\") pod \"auto-csr-approver-29535456-ngmsm\" (UID: \"bbfd9013-6210-4ca0-b7d6-ce58c547779b\") " pod="openshift-infra/auto-csr-approver-29535456-ngmsm" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.298925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.299248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82935132-2a23-4b0c-86c5-be40089b7e0b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.301975 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsdj5\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-kube-api-access-hsdj5\") pod \"rabbitmq-server-0\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.326640 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.378699 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc8t9\" (UniqueName: \"kubernetes.io/projected/bbfd9013-6210-4ca0-b7d6-ce58c547779b-kube-api-access-wc8t9\") pod \"auto-csr-approver-29535456-ngmsm\" (UID: \"bbfd9013-6210-4ca0-b7d6-ce58c547779b\") " pod="openshift-infra/auto-csr-approver-29535456-ngmsm" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.397722 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc8t9\" (UniqueName: \"kubernetes.io/projected/bbfd9013-6210-4ca0-b7d6-ce58c547779b-kube-api-access-wc8t9\") pod \"auto-csr-approver-29535456-ngmsm\" (UID: \"bbfd9013-6210-4ca0-b7d6-ce58c547779b\") " pod="openshift-infra/auto-csr-approver-29535456-ngmsm" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.438587 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.439935 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.456673 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.456906 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jlr6v" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.457062 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.457285 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.457463 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.457654 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.457829 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.463694 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.472747 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535456-ngmsm" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k4l4\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-kube-api-access-5k4l4\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481627 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c793c17-a107-4006-9e15-5a2ac2afa296-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481692 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.481731 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c793c17-a107-4006-9e15-5a2ac2afa296-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.583745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584077 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584105 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584128 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584147 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c793c17-a107-4006-9e15-5a2ac2afa296-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584205 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584275 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c793c17-a107-4006-9e15-5a2ac2afa296-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584335 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584379 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k4l4\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-kube-api-access-5k4l4\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.584651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.585116 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.585982 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 
17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.587160 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.589527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.589908 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.590269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.591671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c793c17-a107-4006-9e15-5a2ac2afa296-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.592427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/9c793c17-a107-4006-9e15-5a2ac2afa296-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.605968 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.606031 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/245f79fdaa526276c0e2ee03c805fa691f64a89402818eb13855aaab894d5f00/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.606850 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k4l4\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-kube-api-access-5k4l4\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.713339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.770688 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:36:00 crc kubenswrapper[4805]: I0226 17:36:00.924581 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:36:00 crc kubenswrapper[4805]: W0226 17:36:00.959191 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82935132_2a23_4b0c_86c5_be40089b7e0b.slice/crio-902f9ec07bec344dd871d3f27d472d3de39be409f4fbeb88dfadf59c7df9f257 WatchSource:0}: Error finding container 902f9ec07bec344dd871d3f27d472d3de39be409f4fbeb88dfadf59c7df9f257: Status 404 returned error can't find the container with id 902f9ec07bec344dd871d3f27d472d3de39be409f4fbeb88dfadf59c7df9f257 Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.092544 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535456-ngmsm"] Feb 26 17:36:01 crc kubenswrapper[4805]: W0226 17:36:01.104417 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfd9013_6210_4ca0_b7d6_ce58c547779b.slice/crio-0b279150e902597c67fbc167c12c43a8148bdbc54f060841df2b65124ab9e952 WatchSource:0}: Error finding container 0b279150e902597c67fbc167c12c43a8148bdbc54f060841df2b65124ab9e952: Status 404 returned error can't find the container with id 0b279150e902597c67fbc167c12c43a8148bdbc54f060841df2b65124ab9e952 Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.251211 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.269421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" event={"ID":"84eddd87-9e83-41ff-a0a9-f813279962cb","Type":"ContainerStarted","Data":"b56a798e140a2bfd4a56340a983043352f0d87331dbf827d8752cbd659ff6778"} Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 
17:36:01.272205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82935132-2a23-4b0c-86c5-be40089b7e0b","Type":"ContainerStarted","Data":"902f9ec07bec344dd871d3f27d472d3de39be409f4fbeb88dfadf59c7df9f257"} Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.275774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535456-ngmsm" event={"ID":"bbfd9013-6210-4ca0-b7d6-ce58c547779b","Type":"ContainerStarted","Data":"0b279150e902597c67fbc167c12c43a8148bdbc54f060841df2b65124ab9e952"} Feb 26 17:36:01 crc kubenswrapper[4805]: W0226 17:36:01.293287 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c793c17_a107_4006_9e15_5a2ac2afa296.slice/crio-69fdfc19086dc3d09d2b2a5bb4e95415a661a317a1aae6e0b13e5fa1c04267fd WatchSource:0}: Error finding container 69fdfc19086dc3d09d2b2a5bb4e95415a661a317a1aae6e0b13e5fa1c04267fd: Status 404 returned error can't find the container with id 69fdfc19086dc3d09d2b2a5bb4e95415a661a317a1aae6e0b13e5fa1c04267fd Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.581597 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.587517 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.599554 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.599590 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-r6b9r" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.599725 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.599923 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.601102 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.608256 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.616344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-kolla-config\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-config-data-default\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617329 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e187-2328-4b07-825d-2435d153499d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77k7\" (UniqueName: \"kubernetes.io/projected/0be4e187-2328-4b07-825d-2435d153499d-kube-api-access-m77k7\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617682 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617815 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617850 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0be4e187-2328-4b07-825d-2435d153499d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.617895 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e187-2328-4b07-825d-2435d153499d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.719513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.719576 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.719946 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0be4e187-2328-4b07-825d-2435d153499d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.719963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e187-2328-4b07-825d-2435d153499d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.720033 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" 
(UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-kolla-config\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.720052 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-config-data-default\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.720081 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e187-2328-4b07-825d-2435d153499d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.720116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m77k7\" (UniqueName: \"kubernetes.io/projected/0be4e187-2328-4b07-825d-2435d153499d-kube-api-access-m77k7\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.720669 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0be4e187-2328-4b07-825d-2435d153499d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.721832 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.722446 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.722754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0be4e187-2328-4b07-825d-2435d153499d-config-data-default\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.723031 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.723056 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a27530f50e3ffc1b3274f1dce3ba8edfe40f68229d6de7359fbe37a2abab25f3/globalmount\"" pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.727846 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e187-2328-4b07-825d-2435d153499d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.737778 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e187-2328-4b07-825d-2435d153499d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.738058 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m77k7\" (UniqueName: \"kubernetes.io/projected/0be4e187-2328-4b07-825d-2435d153499d-kube-api-access-m77k7\") pod \"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.820989 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-81d60ab3-d932-4f4a-a0d1-1e394d64f70d\") pod 
\"openstack-galera-0\" (UID: \"0be4e187-2328-4b07-825d-2435d153499d\") " pod="openstack/openstack-galera-0" Feb 26 17:36:01 crc kubenswrapper[4805]: I0226 17:36:01.920460 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.301524 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c793c17-a107-4006-9e15-5a2ac2afa296","Type":"ContainerStarted","Data":"69fdfc19086dc3d09d2b2a5bb4e95415a661a317a1aae6e0b13e5fa1c04267fd"} Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.547624 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 26 17:36:02 crc kubenswrapper[4805]: W0226 17:36:02.573968 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0be4e187_2328_4b07_825d_2435d153499d.slice/crio-dcf49340a4f9e88b2d0ea56edafe02988f664095f62380614ba8358804847934 WatchSource:0}: Error finding container dcf49340a4f9e88b2d0ea56edafe02988f664095f62380614ba8358804847934: Status 404 returned error can't find the container with id dcf49340a4f9e88b2d0ea56edafe02988f664095f62380614ba8358804847934 Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.928806 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.930422 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.934160 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.934432 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.934589 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-95wxm" Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.934703 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 26 17:36:02 crc kubenswrapper[4805]: I0226 17:36:02.938856 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.054647 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.054791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.054826 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.054874 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.054917 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.054956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8wh7\" (UniqueName: \"kubernetes.io/projected/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-kube-api-access-v8wh7\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.055059 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.055086 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157365 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157457 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8wh7\" (UniqueName: \"kubernetes.io/projected/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-kube-api-access-v8wh7\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157570 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157628 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157702 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157797 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157819 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.157863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.166512 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.167650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.169438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.170670 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.171078 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.171134 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1cd0d2c7a4f7ba71da99b916fb7f0f91aa5c10fd9d51374767c2f0a6705ba740/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.172184 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.177828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8wh7\" (UniqueName: \"kubernetes.io/projected/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-kube-api-access-v8wh7\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.180603 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1f73362-f45b-43a1-a1c7-ec280cb0f3c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.205248 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.215031 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.223158 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.223961 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.224521 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-r99gk" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.224574 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.226282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ec95ed3f-76a0-4c46-899a-5a68c3fff607\") pod \"openstack-cell1-galera-0\" (UID: \"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8\") " pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.260205 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.344489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0be4e187-2328-4b07-825d-2435d153499d","Type":"ContainerStarted","Data":"dcf49340a4f9e88b2d0ea56edafe02988f664095f62380614ba8358804847934"} Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.362547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq68c\" (UniqueName: \"kubernetes.io/projected/2a78640b-0558-468f-893e-db7794aeb8b1-kube-api-access-rq68c\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.362926 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a78640b-0558-468f-893e-db7794aeb8b1-config-data\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.363240 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a78640b-0558-468f-893e-db7794aeb8b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.364301 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a78640b-0558-468f-893e-db7794aeb8b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.364431 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2a78640b-0558-468f-893e-db7794aeb8b1-kolla-config\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.476841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a78640b-0558-468f-893e-db7794aeb8b1-config-data\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.477195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a78640b-0558-468f-893e-db7794aeb8b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.477242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a78640b-0558-468f-893e-db7794aeb8b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.477284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2a78640b-0558-468f-893e-db7794aeb8b1-kolla-config\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.477304 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq68c\" (UniqueName: \"kubernetes.io/projected/2a78640b-0558-468f-893e-db7794aeb8b1-kube-api-access-rq68c\") pod \"memcached-0\" (UID: 
\"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.479589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2a78640b-0558-468f-893e-db7794aeb8b1-kolla-config\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.481361 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2a78640b-0558-468f-893e-db7794aeb8b1-config-data\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.486901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a78640b-0558-468f-893e-db7794aeb8b1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.489837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a78640b-0558-468f-893e-db7794aeb8b1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.495808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq68c\" (UniqueName: \"kubernetes.io/projected/2a78640b-0558-468f-893e-db7794aeb8b1-kube-api-access-rq68c\") pod \"memcached-0\" (UID: \"2a78640b-0558-468f-893e-db7794aeb8b1\") " pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.580589 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 26 17:36:03 crc kubenswrapper[4805]: I0226 17:36:03.860037 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 26 17:36:04 crc kubenswrapper[4805]: I0226 17:36:04.395820 4805 generic.go:334] "Generic (PLEG): container finished" podID="bbfd9013-6210-4ca0-b7d6-ce58c547779b" containerID="48cc9977d90049604cbba3c4e80c842c8e7a834fc1c9627b2fc0b84f24944ca9" exitCode=0 Feb 26 17:36:04 crc kubenswrapper[4805]: I0226 17:36:04.395977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535456-ngmsm" event={"ID":"bbfd9013-6210-4ca0-b7d6-ce58c547779b","Type":"ContainerDied","Data":"48cc9977d90049604cbba3c4e80c842c8e7a834fc1c9627b2fc0b84f24944ca9"} Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.723819 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.725387 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.736252 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-wg826" Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.748597 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.845132 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw94n\" (UniqueName: \"kubernetes.io/projected/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7-kube-api-access-dw94n\") pod \"kube-state-metrics-0\" (UID: \"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7\") " pod="openstack/kube-state-metrics-0" Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.946492 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw94n\" (UniqueName: \"kubernetes.io/projected/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7-kube-api-access-dw94n\") pod \"kube-state-metrics-0\" (UID: \"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7\") " pod="openstack/kube-state-metrics-0" Feb 26 17:36:05 crc kubenswrapper[4805]: I0226 17:36:05.998948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw94n\" (UniqueName: \"kubernetes.io/projected/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7-kube-api-access-dw94n\") pod \"kube-state-metrics-0\" (UID: \"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7\") " pod="openstack/kube-state-metrics-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.075768 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.485735 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.495582 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.503227 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.507378 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-bf865" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.507582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.507707 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.507896 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.518801 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667351 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfpn\" (UniqueName: \"kubernetes.io/projected/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-kube-api-access-vvfpn\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667426 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667462 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667490 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667531 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 
17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.667593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.769775 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvfpn\" (UniqueName: \"kubernetes.io/projected/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-kube-api-access-vvfpn\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.769832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.769867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.769902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.769920 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.769940 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.770057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.771466 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.778523 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 
17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.779976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.782429 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.787923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.789234 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.791995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvfpn\" (UniqueName: \"kubernetes.io/projected/26cecc08-6d2b-4e0f-a231-8ac8764e8ddf-kube-api-access-vvfpn\") pod \"alertmanager-metric-storage-0\" (UID: \"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf\") " pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.844828 4805 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.940198 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.942196 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.946915 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947173 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947359 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947520 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947661 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947783 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947890 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.947965 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-f6mbc" Feb 26 17:36:06 crc kubenswrapper[4805]: I0226 17:36:06.990837 4805 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.081847 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657f7632-1861-4399-9731-81e9977c7640-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.081925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.081977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.081996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.082031 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.082060 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.082084 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.082102 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g65c7\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-kube-api-access-g65c7\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.082122 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 
17:36:07.082364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-config\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 
17:36:07.184653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g65c7\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-kube-api-access-g65c7\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-config\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.184836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657f7632-1861-4399-9731-81e9977c7640-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc 
kubenswrapper[4805]: I0226 17:36:07.184905 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.186075 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.187000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.188454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-config\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.188653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " 
pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.189944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.196827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.199326 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.206298 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657f7632-1861-4399-9731-81e9977c7640-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.211363 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g65c7\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-kube-api-access-g65c7\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc 
kubenswrapper[4805]: I0226 17:36:07.213043 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.213079 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46c8a399786a5d13d427e175cc49bbe19d4e67b986514e0609ea6d8887bfe9ac/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.261684 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:07 crc kubenswrapper[4805]: I0226 17:36:07.280215 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.799180 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2k9nw"] Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.800450 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.803297 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.805088 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.805408 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-wxtm5"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.855224 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2k9nw"]
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.903258 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-tzv64"]
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.915900 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff856bd-109f-4978-9b06-546d2afaf577-combined-ca-bundle\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919114 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-run\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919175 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-log-ovn\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff856bd-109f-4978-9b06-546d2afaf577-ovn-controller-tls-certs\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919246 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-run-ovn\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919269 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5sd\" (UniqueName: \"kubernetes.io/projected/6ff856bd-109f-4978-9b06-546d2afaf577-kube-api-access-6m5sd\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.919308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ff856bd-109f-4978-9b06-546d2afaf577-scripts\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:08 crc kubenswrapper[4805]: I0226 17:36:08.990293 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tzv64"]
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.022850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-run-ovn\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.022911 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m5sd\" (UniqueName: \"kubernetes.io/projected/6ff856bd-109f-4978-9b06-546d2afaf577-kube-api-access-6m5sd\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.022955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ff856bd-109f-4978-9b06-546d2afaf577-scripts\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.023040 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-run\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.023646 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-run-ovn\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.023065 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-etc-ovs\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025143 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff856bd-109f-4978-9b06-546d2afaf577-combined-ca-bundle\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-run\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025223 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djntk\" (UniqueName: \"kubernetes.io/projected/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-kube-api-access-djntk\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-lib\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025277 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-scripts\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025326 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-log-ovn\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-log\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.025392 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff856bd-109f-4978-9b06-546d2afaf577-ovn-controller-tls-certs\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.027912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ff856bd-109f-4978-9b06-546d2afaf577-scripts\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.028132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-log-ovn\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.028312 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6ff856bd-109f-4978-9b06-546d2afaf577-var-run\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.037084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ff856bd-109f-4978-9b06-546d2afaf577-ovn-controller-tls-certs\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.037129 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ff856bd-109f-4978-9b06-546d2afaf577-combined-ca-bundle\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.056339 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m5sd\" (UniqueName: \"kubernetes.io/projected/6ff856bd-109f-4978-9b06-546d2afaf577-kube-api-access-6m5sd\") pod \"ovn-controller-2k9nw\" (UID: \"6ff856bd-109f-4978-9b06-546d2afaf577\") " pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127040 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-run\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127096 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-etc-ovs\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127224 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djntk\" (UniqueName: \"kubernetes.io/projected/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-kube-api-access-djntk\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127255 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-lib\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-scripts\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-log\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127662 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-log\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127755 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-run\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.127947 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-etc-ovs\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.128239 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-var-lib\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.128405 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.135342 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-scripts\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.164710 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djntk\" (UniqueName: \"kubernetes.io/projected/3645c31c-6e0b-4f42-b270-91cf46d0aaf9-kube-api-access-djntk\") pod \"ovn-controller-ovs-tzv64\" (UID: \"3645c31c-6e0b-4f42-b270-91cf46d0aaf9\") " pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.305973 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-tzv64"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.624288 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.625904 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.629389 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.629722 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.630424 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.630669 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.630787 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8pt7j"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.640314 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hszl\" (UniqueName: \"kubernetes.io/projected/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-kube-api-access-2hszl\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740316 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-config\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740362 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740384 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740404 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740443 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.740479 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.841973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842141 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842171 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842190 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hszl\" (UniqueName: \"kubernetes.io/projected/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-kube-api-access-2hszl\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842298 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-config\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.842600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.843249 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-config\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.844707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.846841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.847684 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.849346 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.849384 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/912eeadaec4e78de916a84a682e726b5483484fa59bb5cc7d9f63901bee7d721/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.856872 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.874874 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hszl\" (UniqueName: \"kubernetes.io/projected/f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9-kube-api-access-2hszl\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.880003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-501fa371-2ca3-47ca-b0fa-2953223b4094\") pod \"ovsdbserver-nb-0\" (UID: \"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9\") " pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:09 crc kubenswrapper[4805]: I0226 17:36:09.943760 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.491742 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535456-ngmsm" event={"ID":"bbfd9013-6210-4ca0-b7d6-ce58c547779b","Type":"ContainerDied","Data":"0b279150e902597c67fbc167c12c43a8148bdbc54f060841df2b65124ab9e952"}
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.492031 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b279150e902597c67fbc167c12c43a8148bdbc54f060841df2b65124ab9e952"
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.495679 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8","Type":"ContainerStarted","Data":"564dd1b70b2f60e647a69c7868d1767ec690596248b7e632081dd4b8ab0cf8ce"}
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.530485 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535456-ngmsm"
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.694343 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc8t9\" (UniqueName: \"kubernetes.io/projected/bbfd9013-6210-4ca0-b7d6-ce58c547779b-kube-api-access-wc8t9\") pod \"bbfd9013-6210-4ca0-b7d6-ce58c547779b\" (UID: \"bbfd9013-6210-4ca0-b7d6-ce58c547779b\") "
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.732748 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfd9013-6210-4ca0-b7d6-ce58c547779b-kube-api-access-wc8t9" (OuterVolumeSpecName: "kube-api-access-wc8t9") pod "bbfd9013-6210-4ca0-b7d6-ce58c547779b" (UID: "bbfd9013-6210-4ca0-b7d6-ce58c547779b"). InnerVolumeSpecName "kube-api-access-wc8t9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:36:12 crc kubenswrapper[4805]: I0226 17:36:12.798622 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc8t9\" (UniqueName: \"kubernetes.io/projected/bbfd9013-6210-4ca0-b7d6-ce58c547779b-kube-api-access-wc8t9\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:13 crc kubenswrapper[4805]: E0226 17:36:13.077841 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfd9013_6210_4ca0_b7d6_ce58c547779b.slice\": RecentStats: unable to find data in memory cache]"
Feb 26 17:36:13 crc kubenswrapper[4805]: I0226 17:36:13.502071 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535456-ngmsm"
Feb 26 17:36:13 crc kubenswrapper[4805]: I0226 17:36:13.605293 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535450-rlwkb"]
Feb 26 17:36:13 crc kubenswrapper[4805]: I0226 17:36:13.612447 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535450-rlwkb"]
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.143830 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 26 17:36:14 crc kubenswrapper[4805]: E0226 17:36:14.144176 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfd9013-6210-4ca0-b7d6-ce58c547779b" containerName="oc"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.144192 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfd9013-6210-4ca0-b7d6-ce58c547779b" containerName="oc"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.144493 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfd9013-6210-4ca0-b7d6-ce58c547779b" containerName="oc"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.145318 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.147573 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tkrcq"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.147721 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.147869 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.148787 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.168728 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227224 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227253 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-config\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227351 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227662 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92l7\" (UniqueName: \"kubernetes.io/projected/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-kube-api-access-j92l7\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227696 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227739 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.227889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.329433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.329986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330052 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330092 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-config\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330123 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92l7\" (UniqueName: \"kubernetes.io/projected/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-kube-api-access-j92l7\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330202 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330235 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.330855 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0"
Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.331109 4805 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-config\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.333484 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.333514 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0ba6403fcc057fa9bfb8da7b67468f8583c2ed2e21e887599d6c47cb9451f39f/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.337423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.339673 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.348910 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92l7\" (UniqueName: 
\"kubernetes.io/projected/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-kube-api-access-j92l7\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.350426 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.390314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3ad37c84-a647-4445-9091-e4eb0e20ff26\") pod \"ovsdbserver-sb-0\" (UID: \"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb\") " pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.470056 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:14 crc kubenswrapper[4805]: I0226 17:36:14.986248 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f0c7db9-369e-4a42-bf2e-2bacfed49fe2" path="/var/lib/kubelet/pods/0f0c7db9-369e-4a42-bf2e-2bacfed49fe2/volumes" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.132057 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.135384 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.144323 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-22bhf" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.144737 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.145066 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.145259 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.145662 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.162047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.289226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgpmh\" (UniqueName: \"kubernetes.io/projected/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-kube-api-access-tgpmh\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.289323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-distributor-grpc\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.289400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.289442 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.289509 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.392558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.392701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgpmh\" (UniqueName: \"kubernetes.io/projected/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-kube-api-access-tgpmh\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.392777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.392849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.392890 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.394683 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.400155 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.407214 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.425093 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.426597 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.430718 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.431757 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.431896 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.444879 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.455439 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.467962 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgpmh\" (UniqueName: \"kubernetes.io/projected/5e792f59-e6d1-48d3-bc1b-e17d2e0da457-kube-api-access-tgpmh\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-tk4vq\" (UID: \"5e792f59-e6d1-48d3-bc1b-e17d2e0da457\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.476834 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.494381 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.494440 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.494476 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.494604 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xs62\" (UniqueName: \"kubernetes.io/projected/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-kube-api-access-2xs62\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 
17:36:17.494621 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.494650 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.596528 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xs62\" (UniqueName: \"kubernetes.io/projected/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-kube-api-access-2xs62\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597505 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597551 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597594 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597620 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.597650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.599309 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.599819 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.602560 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.602913 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.609286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.613441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 
17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.620622 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.628527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xs62\" (UniqueName: \"kubernetes.io/projected/3dfb8794-f574-4514-b4e3-b7cdcc1460b5-kube-api-access-2xs62\") pod \"cloudkitty-lokistack-querier-58c84b5844-wxfvc\" (UID: \"3dfb8794-f574-4514-b4e3-b7cdcc1460b5\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.659553 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.700203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.700313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") 
" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.700389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.700436 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nbbd\" (UniqueName: \"kubernetes.io/projected/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-kube-api-access-6nbbd\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.700516 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.748244 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.768419 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.780086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.782147 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.782471 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.782483 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.782655 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.782777 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.782939 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.802939 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.804086 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.804161 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.804206 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nbbd\" (UniqueName: \"kubernetes.io/projected/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-kube-api-access-6nbbd\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.804267 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.804319 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.805792 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.811464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.818511 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.827195 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.827845 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.832714 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.843851 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nbbd\" (UniqueName: \"kubernetes.io/projected/8cc224a9-ec5a-40b2-b0b6-0905f3553e08-kube-api-access-6nbbd\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6\" (UID: \"8cc224a9-ec5a-40b2-b0b6-0905f3553e08\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.853849 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw"] Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.886944 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-qjp7m" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.929486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc 
kubenswrapper[4805]: I0226 17:36:17.929966 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930111 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930274 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930414 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9c7\" (UniqueName: \"kubernetes.io/projected/64b9816e-18ec-481f-95e9-7dc3e56534f6-kube-api-access-mh9c7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930541 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930717 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.930958 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931100 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931205 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts6wr\" (UniqueName: \"kubernetes.io/projected/5512e840-a01e-4669-bae6-a677ae85819c-kube-api-access-ts6wr\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931555 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931666 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931760 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:17 crc kubenswrapper[4805]: I0226 17:36:17.931886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.024595 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.033929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.033989 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts6wr\" (UniqueName: \"kubernetes.io/projected/5512e840-a01e-4669-bae6-a677ae85819c-kube-api-access-ts6wr\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034043 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034062 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034132 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034167 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034194 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034216 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9c7\" (UniqueName: \"kubernetes.io/projected/64b9816e-18ec-481f-95e9-7dc3e56534f6-kube-api-access-mh9c7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034301 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034422 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.034461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc 
kubenswrapper[4805]: I0226 17:36:18.035401 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: E0226 17:36:18.035735 4805 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 26 17:36:18 crc kubenswrapper[4805]: E0226 17:36:18.035794 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tls-secret podName:5512e840-a01e-4669-bae6-a677ae85819c nodeName:}" failed. No retries permitted until 2026-02-26 17:36:18.535780486 +0000 UTC m=+1293.097534825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" (UID: "5512e840-a01e-4669-bae6-a677ae85819c") : secret "cloudkitty-lokistack-gateway-http" not found Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.036472 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.036606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.036744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: E0226 17:36:18.037180 4805 secret.go:188] Couldn't get secret openstack/cloudkitty-lokistack-gateway-http: secret "cloudkitty-lokistack-gateway-http" not found Feb 26 17:36:18 crc kubenswrapper[4805]: E0226 17:36:18.037272 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tls-secret 
podName:64b9816e-18ec-481f-95e9-7dc3e56534f6 nodeName:}" failed. No retries permitted until 2026-02-26 17:36:18.537254974 +0000 UTC m=+1293.099009313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tls-secret") pod "cloudkitty-lokistack-gateway-7f8685b49f-psb9w" (UID: "64b9816e-18ec-481f-95e9-7dc3e56534f6") : secret "cloudkitty-lokistack-gateway-http" not found Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.038798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5512e840-a01e-4669-bae6-a677ae85819c-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.039848 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.040026 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.040080 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.043533 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.045679 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.048032 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.051738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: 
I0226 17:36:18.052162 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.052677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.053148 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts6wr\" (UniqueName: \"kubernetes.io/projected/5512e840-a01e-4669-bae6-a677ae85819c-kube-api-access-ts6wr\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.055342 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9c7\" (UniqueName: \"kubernetes.io/projected/64b9816e-18ec-481f-95e9-7dc3e56534f6-kube-api-access-mh9c7\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.350261 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.351632 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.355560 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.355580 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.378544 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.441388 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fs5\" (UniqueName: \"kubernetes.io/projected/537eaeba-93f9-4d28-871b-049946f86c2b-kube-api-access-j4fs5\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.441457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.441517 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.441796 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.441908 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537eaeba-93f9-4d28-871b-049946f86c2b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.441960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.442092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.442123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc 
kubenswrapper[4805]: I0226 17:36:18.543850 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.543907 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537eaeba-93f9-4d28-871b-049946f86c2b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.543936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.543998 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.544041 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 
17:36:18.544065 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.544095 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4fs5\" (UniqueName: \"kubernetes.io/projected/537eaeba-93f9-4d28-871b-049946f86c2b-kube-api-access-j4fs5\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.544137 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.544184 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.544223 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.544374 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.547614 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/537eaeba-93f9-4d28-871b-049946f86c2b-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.547940 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.550082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/64b9816e-18ec-481f-95e9-7dc3e56534f6-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-psb9w\" (UID: \"64b9816e-18ec-481f-95e9-7dc3e56534f6\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.550439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: 
\"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.557703 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5512e840-a01e-4669-bae6-a677ae85819c-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-k2wqw\" (UID: \"5512e840-a01e-4669-bae6-a677ae85819c\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.565777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.567110 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4fs5\" (UniqueName: \"kubernetes.io/projected/537eaeba-93f9-4d28-871b-049946f86c2b-kube-api-access-j4fs5\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.567463 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.571582 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.575877 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.578952 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.580245 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.582755 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.587431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/537eaeba-93f9-4d28-871b-049946f86c2b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.590286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.591826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"537eaeba-93f9-4d28-871b-049946f86c2b\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.645481 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.645656 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.645716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.645842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f96258c-e535-406a-b67e-601f769b7e2e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 
17:36:18.645885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.645914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.646009 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x886c\" (UniqueName: \"kubernetes.io/projected/5f96258c-e535-406a-b67e-601f769b7e2e-kube-api-access-x886c\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.677073 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.678538 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.680728 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.682679 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.683341 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.701495 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.713230 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.747857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac81593-9624-455e-89a0-fd8e84a4e8b5-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.747955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x886c\" (UniqueName: \"kubernetes.io/projected/5f96258c-e535-406a-b67e-601f769b7e2e-kube-api-access-x886c\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748002 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748571 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748596 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rmjg\" (UniqueName: \"kubernetes.io/projected/4ac81593-9624-455e-89a0-fd8e84a4e8b5-kube-api-access-5rmjg\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748637 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748661 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748725 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f96258c-e535-406a-b67e-601f769b7e2e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748748 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748789 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.748840 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.749412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.750949 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.751076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f96258c-e535-406a-b67e-601f769b7e2e-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.751248 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.751720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.753593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/5f96258c-e535-406a-b67e-601f769b7e2e-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.768269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x886c\" (UniqueName: \"kubernetes.io/projected/5f96258c-e535-406a-b67e-601f769b7e2e-kube-api-access-x886c\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.768982 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.778238 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"5f96258c-e535-406a-b67e-601f769b7e2e\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.850516 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.850932 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rmjg\" (UniqueName: \"kubernetes.io/projected/4ac81593-9624-455e-89a0-fd8e84a4e8b5-kube-api-access-5rmjg\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.850980 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.851009 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.851084 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.851114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.851207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac81593-9624-455e-89a0-fd8e84a4e8b5-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.852381 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac81593-9624-455e-89a0-fd8e84a4e8b5-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.852767 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.854532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.855306 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.856554 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.860888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4ac81593-9624-455e-89a0-fd8e84a4e8b5-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc 
kubenswrapper[4805]: I0226 17:36:18.874867 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rmjg\" (UniqueName: \"kubernetes.io/projected/4ac81593-9624-455e-89a0-fd8e84a4e8b5-kube-api-access-5rmjg\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.883447 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"4ac81593-9624-455e-89a0-fd8e84a4e8b5\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:18 crc kubenswrapper[4805]: I0226 17:36:18.960357 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:19 crc kubenswrapper[4805]: I0226 17:36:19.001999 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:24 crc kubenswrapper[4805]: E0226 17:36:24.831835 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 17:36:24 crc kubenswrapper[4805]: E0226 17:36:24.832739 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hsdj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(82935132-2a23-4b0c-86c5-be40089b7e0b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:36:24 crc 
kubenswrapper[4805]: E0226 17:36:24.833921 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.557045 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.557293 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5k4l4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(9c793c17-a107-4006-9e15-5a2ac2afa296): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:36:25 crc 
kubenswrapper[4805]: E0226 17:36:25.558536 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.568598 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.568754 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g77c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-spqx5_openstack(84eddd87-9e83-41ff-a0a9-f813279962cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.569882 4805 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.598830 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.598997 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45qxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-sw4n4_openstack(13a896a6-ebef-4330-8da4-2a48ff648afd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.600695 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" podUID="13a896a6-ebef-4330-8da4-2a48ff648afd" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.627616 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.627629 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" Feb 26 17:36:25 crc kubenswrapper[4805]: E0226 17:36:25.627685 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.312198 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.398355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a896a6-ebef-4330-8da4-2a48ff648afd-config\") pod \"13a896a6-ebef-4330-8da4-2a48ff648afd\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.398509 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45qxk\" (UniqueName: \"kubernetes.io/projected/13a896a6-ebef-4330-8da4-2a48ff648afd-kube-api-access-45qxk\") pod \"13a896a6-ebef-4330-8da4-2a48ff648afd\" (UID: \"13a896a6-ebef-4330-8da4-2a48ff648afd\") " Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.399490 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a896a6-ebef-4330-8da4-2a48ff648afd-config" (OuterVolumeSpecName: "config") pod "13a896a6-ebef-4330-8da4-2a48ff648afd" (UID: "13a896a6-ebef-4330-8da4-2a48ff648afd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.399716 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a896a6-ebef-4330-8da4-2a48ff648afd-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.404757 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a896a6-ebef-4330-8da4-2a48ff648afd-kube-api-access-45qxk" (OuterVolumeSpecName: "kube-api-access-45qxk") pod "13a896a6-ebef-4330-8da4-2a48ff648afd" (UID: "13a896a6-ebef-4330-8da4-2a48ff648afd"). InnerVolumeSpecName "kube-api-access-45qxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.500723 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45qxk\" (UniqueName: \"kubernetes.io/projected/13a896a6-ebef-4330-8da4-2a48ff648afd-kube-api-access-45qxk\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.653082 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" event={"ID":"13a896a6-ebef-4330-8da4-2a48ff648afd","Type":"ContainerDied","Data":"4535257b02b1010f4bf353db37329f213c5c6a537bccd5efab5782b057991191"} Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.653112 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-sw4n4" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.742858 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sw4n4"] Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.759252 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-sw4n4"] Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.974669 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a896a6-ebef-4330-8da4-2a48ff648afd" path="/var/lib/kubelet/pods/13a896a6-ebef-4330-8da4-2a48ff648afd/volumes" Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.975062 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2k9nw"] Feb 26 17:36:28 crc kubenswrapper[4805]: I0226 17:36:28.975086 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 26 17:36:28 crc kubenswrapper[4805]: E0226 17:36:28.992509 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 
26 17:36:28 crc kubenswrapper[4805]: E0226 17:36:28.992662 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z68jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfil
e:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-k8brz_openstack(41552c16-eba4-4163-a652-8490f5dd0ef1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:36:28 crc kubenswrapper[4805]: E0226 17:36:28.994423 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" Feb 26 17:36:29 crc kubenswrapper[4805]: E0226 17:36:29.226870 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 26 17:36:29 crc kubenswrapper[4805]: E0226 17:36:29.227638 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92hrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-mhqc4_openstack(6bafaeb2-6da2-4950-9dc8-82708a80fb9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:36:29 crc kubenswrapper[4805]: E0226 17:36:29.229068 4805 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" podUID="6bafaeb2-6da2-4950-9dc8-82708a80fb9c" Feb 26 17:36:29 crc kubenswrapper[4805]: W0226 17:36:29.441553 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfb8794_f574_4514_b4e3_b7cdcc1460b5.slice/crio-d5208fb0de42c403cdc1ffef8ac5080a7ff2ea20796c2a3f2b366e737d5725b4 WatchSource:0}: Error finding container d5208fb0de42c403cdc1ffef8ac5080a7ff2ea20796c2a3f2b366e737d5725b4: Status 404 returned error can't find the container with id d5208fb0de42c403cdc1ffef8ac5080a7ff2ea20796c2a3f2b366e737d5725b4 Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.454136 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:36:29 crc kubenswrapper[4805]: W0226 17:36:29.466000 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod657f7632_1861_4399_9731_81e9977c7640.slice/crio-be34f6124874aca36f8ef9de9381bfa07f028b4c95e6aca832e61d23c528dcfc WatchSource:0}: Error finding container be34f6124874aca36f8ef9de9381bfa07f028b4c95e6aca832e61d23c528dcfc: Status 404 returned error can't find the container with id be34f6124874aca36f8ef9de9381bfa07f028b4c95e6aca832e61d23c528dcfc Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.469218 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc"] Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.484903 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 26 17:36:29 crc kubenswrapper[4805]: W0226 17:36:29.487501 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ac81593_9624_455e_89a0_fd8e84a4e8b5.slice/crio-e8a0e76b8aea26e102ce5bad25036f9cf36758844a0f1c4b4a201e2feea304a6 WatchSource:0}: Error finding container e8a0e76b8aea26e102ce5bad25036f9cf36758844a0f1c4b4a201e2feea304a6: Status 404 returned error can't find the container with id e8a0e76b8aea26e102ce5bad25036f9cf36758844a0f1c4b4a201e2feea304a6 Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.505255 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 26 17:36:29 crc kubenswrapper[4805]: W0226 17:36:29.511772 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f96258c_e535_406a_b67e_601f769b7e2e.slice/crio-06e6f89e51f11b3dfc31ca667f4ce029984c14fd9ab5178bff5a4f91acd0842d WatchSource:0}: Error finding container 06e6f89e51f11b3dfc31ca667f4ce029984c14fd9ab5178bff5a4f91acd0842d: Status 404 returned error can't find the container with id 06e6f89e51f11b3dfc31ca667f4ce029984c14fd9ab5178bff5a4f91acd0842d Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.517235 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.523574 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:36:29 crc kubenswrapper[4805]: W0226 17:36:29.523760 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5512e840_a01e_4669_bae6_a677ae85819c.slice/crio-563a9006486e4a0b979804ffee891d91df6ef75a7b6af0a3b7e445cb1d804b8a WatchSource:0}: Error finding container 563a9006486e4a0b979804ffee891d91df6ef75a7b6af0a3b7e445cb1d804b8a: Status 404 returned error can't find the container with id 563a9006486e4a0b979804ffee891d91df6ef75a7b6af0a3b7e445cb1d804b8a Feb 26 17:36:29 
crc kubenswrapper[4805]: I0226 17:36:29.530723 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw"] Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.674768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw" event={"ID":"6ff856bd-109f-4978-9b06-546d2afaf577","Type":"ContainerStarted","Data":"e571c963bdd0da27a27969205610260eb41068b9204445ae0c8df02c95cfd618"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.682294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8","Type":"ContainerStarted","Data":"3bbc7e81964fd16fe9ee143259b9c6c3e92c34f154639de90c2655e9d0712f5e"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.688235 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" event={"ID":"3dfb8794-f574-4514-b4e3-b7cdcc1460b5","Type":"ContainerStarted","Data":"d5208fb0de42c403cdc1ffef8ac5080a7ff2ea20796c2a3f2b366e737d5725b4"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.697577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"5f96258c-e535-406a-b67e-601f769b7e2e","Type":"ContainerStarted","Data":"06e6f89e51f11b3dfc31ca667f4ce029984c14fd9ab5178bff5a4f91acd0842d"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.699431 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" event={"ID":"5512e840-a01e-4669-bae6-a677ae85819c","Type":"ContainerStarted","Data":"563a9006486e4a0b979804ffee891d91df6ef75a7b6af0a3b7e445cb1d804b8a"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.700615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7","Type":"ContainerStarted","Data":"0b726e2072a4435e4da78b93a4af0451ed1eeaa5a962f3293a93c61c27b84c61"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.701733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2a78640b-0558-468f-893e-db7794aeb8b1","Type":"ContainerStarted","Data":"76a17505e6b80974abea88ec7ae4079eb9237652bce103ff05a571ba91ec14cb"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.703567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0be4e187-2328-4b07-825d-2435d153499d","Type":"ContainerStarted","Data":"fca951dece5d7c1b20c9981abb3a25b6c798f933de6a8568aeb5b84c78e67a4d"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.801971 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf","Type":"ContainerStarted","Data":"9f0abf2e6821846e1d3894a896ae73f8ef63c1d79bf7e1f5aba3733a404f168f"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.829513 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4ac81593-9624-455e-89a0-fd8e84a4e8b5","Type":"ContainerStarted","Data":"e8a0e76b8aea26e102ce5bad25036f9cf36758844a0f1c4b4a201e2feea304a6"} Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.842805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerStarted","Data":"be34f6124874aca36f8ef9de9381bfa07f028b4c95e6aca832e61d23c528dcfc"} Feb 26 17:36:29 crc kubenswrapper[4805]: E0226 17:36:29.847221 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.867769 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w"] Feb 26 17:36:29 crc kubenswrapper[4805]: W0226 17:36:29.887959 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cc224a9_ec5a_40b2_b0b6_0905f3553e08.slice/crio-7181eda707e192586f751861dc85469d581556c23773d923950d0de50f202201 WatchSource:0}: Error finding container 7181eda707e192586f751861dc85469d581556c23773d923950d0de50f202201: Status 404 returned error can't find the container with id 7181eda707e192586f751861dc85469d581556c23773d923950d0de50f202201 Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.937125 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-tzv64"] Feb 26 17:36:29 crc kubenswrapper[4805]: I0226 17:36:29.956554 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6"] Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.025366 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.068783 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.149882 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq"] Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.365724 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.409003 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92hrl\" (UniqueName: \"kubernetes.io/projected/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-kube-api-access-92hrl\") pod \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.409201 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-config\") pod \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.409360 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-dns-svc\") pod \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\" (UID: \"6bafaeb2-6da2-4950-9dc8-82708a80fb9c\") " Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.411327 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bafaeb2-6da2-4950-9dc8-82708a80fb9c" (UID: "6bafaeb2-6da2-4950-9dc8-82708a80fb9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.411359 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-config" (OuterVolumeSpecName: "config") pod "6bafaeb2-6da2-4950-9dc8-82708a80fb9c" (UID: "6bafaeb2-6da2-4950-9dc8-82708a80fb9c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.418607 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-kube-api-access-92hrl" (OuterVolumeSpecName: "kube-api-access-92hrl") pod "6bafaeb2-6da2-4950-9dc8-82708a80fb9c" (UID: "6bafaeb2-6da2-4950-9dc8-82708a80fb9c"). InnerVolumeSpecName "kube-api-access-92hrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.512613 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92hrl\" (UniqueName: \"kubernetes.io/projected/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-kube-api-access-92hrl\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.512677 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.512700 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bafaeb2-6da2-4950-9dc8-82708a80fb9c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:30 crc kubenswrapper[4805]: I0226 17:36:30.980482 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.057924 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.057959 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tzv64" event={"ID":"3645c31c-6e0b-4f42-b270-91cf46d0aaf9","Type":"ContainerStarted","Data":"76538dc1cbc46bbddf3252da5dd3456321496312d82c05271571bfb69612e902"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.057986 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" event={"ID":"5e792f59-e6d1-48d3-bc1b-e17d2e0da457","Type":"ContainerStarted","Data":"374ba0b94bd7c912d0a3575322b8e7967c1a48adfbb8fd3b262c18bbe4c765f6"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.058004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"537eaeba-93f9-4d28-871b-049946f86c2b","Type":"ContainerStarted","Data":"88bdfc4242f179c25694d6990e2b3eda3b057db410d0950aa95e94d771ecf45b"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.058034 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-mhqc4" event={"ID":"6bafaeb2-6da2-4950-9dc8-82708a80fb9c","Type":"ContainerDied","Data":"034a9168cbfd899f5d4d374d1fcadf62a274fd59d75b5691f0e4a8dcc499de31"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.058050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" event={"ID":"8cc224a9-ec5a-40b2-b0b6-0905f3553e08","Type":"ContainerStarted","Data":"7181eda707e192586f751861dc85469d581556c23773d923950d0de50f202201"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.058062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" event={"ID":"64b9816e-18ec-481f-95e9-7dc3e56534f6","Type":"ContainerStarted","Data":"5326e2dedd66999494b71d0e06dfda67ad3b5882a610cc71d278d864debb113a"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.060374 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb","Type":"ContainerStarted","Data":"8622b7813f8d1a4e5e1320cd57a8fac3ee324e73195b2b3f6feb75fded6b15a4"} Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.634963 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mhqc4"] Feb 26 17:36:31 crc kubenswrapper[4805]: I0226 17:36:31.650340 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-mhqc4"] Feb 26 17:36:31 crc kubenswrapper[4805]: W0226 17:36:31.773205 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8fb0872_9bb6_4ccf_92fd_58bf0a7868d9.slice/crio-aa836ca2451c3fa48a5d3a87dcfe1058b97bdba5e726570726a055fb32c1c2b8 WatchSource:0}: Error finding container aa836ca2451c3fa48a5d3a87dcfe1058b97bdba5e726570726a055fb32c1c2b8: Status 404 returned error can't find the container with id aa836ca2451c3fa48a5d3a87dcfe1058b97bdba5e726570726a055fb32c1c2b8 Feb 26 17:36:32 crc kubenswrapper[4805]: I0226 17:36:32.080509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9","Type":"ContainerStarted","Data":"aa836ca2451c3fa48a5d3a87dcfe1058b97bdba5e726570726a055fb32c1c2b8"} Feb 26 17:36:32 crc kubenswrapper[4805]: I0226 17:36:32.963494 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bafaeb2-6da2-4950-9dc8-82708a80fb9c" path="/var/lib/kubelet/pods/6bafaeb2-6da2-4950-9dc8-82708a80fb9c/volumes" Feb 26 17:36:32 crc kubenswrapper[4805]: I0226 17:36:32.979398 
4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:36:32 crc kubenswrapper[4805]: I0226 17:36:32.979457 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:36:35 crc kubenswrapper[4805]: I0226 17:36:35.106189 4805 generic.go:334] "Generic (PLEG): container finished" podID="c1f73362-f45b-43a1-a1c7-ec280cb0f3c8" containerID="3bbc7e81964fd16fe9ee143259b9c6c3e92c34f154639de90c2655e9d0712f5e" exitCode=0 Feb 26 17:36:35 crc kubenswrapper[4805]: I0226 17:36:35.106279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8","Type":"ContainerDied","Data":"3bbc7e81964fd16fe9ee143259b9c6c3e92c34f154639de90c2655e9d0712f5e"} Feb 26 17:36:35 crc kubenswrapper[4805]: I0226 17:36:35.107969 4805 generic.go:334] "Generic (PLEG): container finished" podID="0be4e187-2328-4b07-825d-2435d153499d" containerID="fca951dece5d7c1b20c9981abb3a25b6c798f933de6a8568aeb5b84c78e67a4d" exitCode=0 Feb 26 17:36:35 crc kubenswrapper[4805]: I0226 17:36:35.108006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0be4e187-2328-4b07-825d-2435d153499d","Type":"ContainerDied","Data":"fca951dece5d7c1b20c9981abb3a25b6c798f933de6a8568aeb5b84c78e67a4d"} Feb 26 17:36:38 crc kubenswrapper[4805]: I0226 17:36:38.131839 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"0be4e187-2328-4b07-825d-2435d153499d","Type":"ContainerStarted","Data":"be62ddf69e65dd9ace5634271c97e1dfbe0f5728f9adc7cbd5521b819a78b91e"} Feb 26 17:36:38 crc kubenswrapper[4805]: I0226 17:36:38.133239 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1f73362-f45b-43a1-a1c7-ec280cb0f3c8","Type":"ContainerStarted","Data":"742de8535d9b6d26d565a5460a0161e609e8e5bbcf99f8948b11d37ed72fa7ea"} Feb 26 17:36:38 crc kubenswrapper[4805]: I0226 17:36:38.225736 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=11.669599717 podStartE2EDuration="38.225720395s" podCreationTimestamp="2026-02-26 17:36:00 +0000 UTC" firstStartedPulling="2026-02-26 17:36:02.586472157 +0000 UTC m=+1277.148226496" lastFinishedPulling="2026-02-26 17:36:29.142592835 +0000 UTC m=+1303.704347174" observedRunningTime="2026-02-26 17:36:38.215406945 +0000 UTC m=+1312.777161304" watchObservedRunningTime="2026-02-26 17:36:38.225720395 +0000 UTC m=+1312.787474734" Feb 26 17:36:38 crc kubenswrapper[4805]: I0226 17:36:38.250137 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.518420232 podStartE2EDuration="37.250115492s" podCreationTimestamp="2026-02-26 17:36:01 +0000 UTC" firstStartedPulling="2026-02-26 17:36:12.470852521 +0000 UTC m=+1287.032606880" lastFinishedPulling="2026-02-26 17:36:29.202547801 +0000 UTC m=+1303.764302140" observedRunningTime="2026-02-26 17:36:38.233441101 +0000 UTC m=+1312.795195450" watchObservedRunningTime="2026-02-26 17:36:38.250115492 +0000 UTC m=+1312.811869831" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.143645 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" 
event={"ID":"5f96258c-e535-406a-b67e-601f769b7e2e","Type":"ContainerStarted","Data":"a942733f81cab302470c459b0f1a21ed309098594c880f49d36f170cc83eae6d"} Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.144251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.146148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" event={"ID":"5512e840-a01e-4669-bae6-a677ae85819c","Type":"ContainerStarted","Data":"6210e14e62103faa16d549271c23f72fe534de38bb5b22cab1e0fc40e0a6e612"} Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.146443 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.152590 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2a78640b-0558-468f-893e-db7794aeb8b1","Type":"ContainerStarted","Data":"941adf5cab9b714cf83c8c317ea2ecb88f15fd8f49d6f53408fd0fc41e9c4063"} Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.152728 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.154428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" event={"ID":"8cc224a9-ec5a-40b2-b0b6-0905f3553e08","Type":"ContainerStarted","Data":"abbee3e54b6e549a76be1ffac30883318ca92529fd1f8b7d8e00fffbd21e7630"} Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.154838 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.156325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-tzv64" event={"ID":"3645c31c-6e0b-4f42-b270-91cf46d0aaf9","Type":"ContainerStarted","Data":"5e793699734d467331a07d10f1f5039123dbeb925296bd0cef581f10f5fc50dd"} Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.159051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"4ac81593-9624-455e-89a0-fd8e84a4e8b5","Type":"ContainerStarted","Data":"8f28175939a05e147a8eb42031cc8d9eb385a62e55ed64f4339cfdf0fc68c49f"} Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.159113 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.166881 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=13.953336162 podStartE2EDuration="22.166857663s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.515292219 +0000 UTC m=+1304.077046558" lastFinishedPulling="2026-02-26 17:36:37.72881372 +0000 UTC m=+1312.290568059" observedRunningTime="2026-02-26 17:36:39.160062581 +0000 UTC m=+1313.721816930" watchObservedRunningTime="2026-02-26 17:36:39.166857663 +0000 UTC m=+1313.728612002" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.186993 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=14.136630266 podStartE2EDuration="22.186972341s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.490489082 +0000 UTC m=+1304.052243421" lastFinishedPulling="2026-02-26 17:36:37.540831167 +0000 UTC m=+1312.102585496" observedRunningTime="2026-02-26 17:36:39.176882606 +0000 UTC m=+1313.738636955" watchObservedRunningTime="2026-02-26 17:36:39.186972341 +0000 UTC m=+1313.748726680" Feb 26 17:36:39 
crc kubenswrapper[4805]: I0226 17:36:39.220427 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" podStartSLOduration=14.916878897 podStartE2EDuration="22.220411997s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.893459792 +0000 UTC m=+1304.455214131" lastFinishedPulling="2026-02-26 17:36:37.196992892 +0000 UTC m=+1311.758747231" observedRunningTime="2026-02-26 17:36:39.215362439 +0000 UTC m=+1313.777116778" watchObservedRunningTime="2026-02-26 17:36:39.220411997 +0000 UTC m=+1313.782166326" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.245509 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=28.149903562 podStartE2EDuration="36.245482801s" podCreationTimestamp="2026-02-26 17:36:03 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.082270239 +0000 UTC m=+1303.644024578" lastFinishedPulling="2026-02-26 17:36:37.177849478 +0000 UTC m=+1311.739603817" observedRunningTime="2026-02-26 17:36:39.231962629 +0000 UTC m=+1313.793716998" watchObservedRunningTime="2026-02-26 17:36:39.245482801 +0000 UTC m=+1313.807237170" Feb 26 17:36:39 crc kubenswrapper[4805]: I0226 17:36:39.251605 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-k2wqw" podStartSLOduration=14.261845392 podStartE2EDuration="22.251586415s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.532855813 +0000 UTC m=+1304.094610142" lastFinishedPulling="2026-02-26 17:36:37.522596826 +0000 UTC m=+1312.084351165" observedRunningTime="2026-02-26 17:36:39.250540319 +0000 UTC m=+1313.812294698" watchObservedRunningTime="2026-02-26 17:36:39.251586415 +0000 UTC m=+1313.813340754" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.167919 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"537eaeba-93f9-4d28-871b-049946f86c2b","Type":"ContainerStarted","Data":"9c955cef989e37ef58c7465ef6b097bdd191d6b2b128ca7cef86e951a3a39068"} Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.169285 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.170855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" event={"ID":"3dfb8794-f574-4514-b4e3-b7cdcc1460b5","Type":"ContainerStarted","Data":"02d6db105d283474d504f0df217cc8b4ff523752887619225f118766703df0c3"} Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.171366 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.173968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" event={"ID":"64b9816e-18ec-481f-95e9-7dc3e56534f6","Type":"ContainerStarted","Data":"34d22121826d85582e034a7af8cc0389643252e8730ebc9a991b26ed340c4837"} Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.174058 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.175661 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.188281 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.194343 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=15.527427176 podStartE2EDuration="23.194320574s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:30.062035715 +0000 UTC m=+1304.623790054" lastFinishedPulling="2026-02-26 17:36:37.728929113 +0000 UTC m=+1312.290683452" observedRunningTime="2026-02-26 17:36:40.189936023 +0000 UTC m=+1314.751690372" watchObservedRunningTime="2026-02-26 17:36:40.194320574 +0000 UTC m=+1314.756074913" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.213919 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" podStartSLOduration=14.868656628 podStartE2EDuration="23.213897069s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.444720055 +0000 UTC m=+1304.006474404" lastFinishedPulling="2026-02-26 17:36:37.789960506 +0000 UTC m=+1312.351714845" observedRunningTime="2026-02-26 17:36:40.208716468 +0000 UTC m=+1314.770470807" watchObservedRunningTime="2026-02-26 17:36:40.213897069 +0000 UTC m=+1314.775651408" Feb 26 17:36:40 crc kubenswrapper[4805]: I0226 17:36:40.238632 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-psb9w" podStartSLOduration=15.319120158 podStartE2EDuration="23.238606584s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.871613959 +0000 UTC m=+1304.433368288" lastFinishedPulling="2026-02-26 17:36:37.791100375 +0000 UTC m=+1312.352854714" observedRunningTime="2026-02-26 17:36:40.227491283 +0000 UTC m=+1314.789245632" watchObservedRunningTime="2026-02-26 17:36:40.238606584 +0000 UTC m=+1314.800360923" Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.181078 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7","Type":"ContainerStarted","Data":"ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.181577 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.182912 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9","Type":"ContainerStarted","Data":"05e22918f87bc477f5abe89f7dc6c11ed7894e6ea1656fd3f752ac72e0988027"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.184392 4805 generic.go:334] "Generic (PLEG): container finished" podID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerID="dd32bc3cd3a5c36ea99515bced60f84d0a86c79b56c5453789ae68624edd1fed" exitCode=0 Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.184453 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" event={"ID":"84eddd87-9e83-41ff-a0a9-f813279962cb","Type":"ContainerDied","Data":"dd32bc3cd3a5c36ea99515bced60f84d0a86c79b56c5453789ae68624edd1fed"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.187503 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb","Type":"ContainerStarted","Data":"8ea3a45a784664cd0e24fe97fde79a83f9b41a1ebd21a22dc2c40a6ffe6e2655"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.189340 4805 generic.go:334] "Generic (PLEG): container finished" podID="3645c31c-6e0b-4f42-b270-91cf46d0aaf9" containerID="5e793699734d467331a07d10f1f5039123dbeb925296bd0cef581f10f5fc50dd" exitCode=0 Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.189397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tzv64" 
event={"ID":"3645c31c-6e0b-4f42-b270-91cf46d0aaf9","Type":"ContainerDied","Data":"5e793699734d467331a07d10f1f5039123dbeb925296bd0cef581f10f5fc50dd"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.192047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerStarted","Data":"2817559e889b1a61eb26a2ea9dc87e75e4d3db17bcbb2aa350cec5dd4478ae3f"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.194123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw" event={"ID":"6ff856bd-109f-4978-9b06-546d2afaf577","Type":"ContainerStarted","Data":"75148d73207d763c4340168d6f25e711b8d79352b9b5e2d5f8fe1876f7205cfe"} Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.194993 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2k9nw" Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.204161 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=27.247650421 podStartE2EDuration="36.204142189s" podCreationTimestamp="2026-02-26 17:36:05 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.519924776 +0000 UTC m=+1304.081679115" lastFinishedPulling="2026-02-26 17:36:38.476416544 +0000 UTC m=+1313.038170883" observedRunningTime="2026-02-26 17:36:41.196638489 +0000 UTC m=+1315.758392828" watchObservedRunningTime="2026-02-26 17:36:41.204142189 +0000 UTC m=+1315.765896528" Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.265796 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2k9nw" podStartSLOduration=24.725243097 podStartE2EDuration="33.265771167s" podCreationTimestamp="2026-02-26 17:36:08 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.082657949 +0000 UTC m=+1303.644412298" lastFinishedPulling="2026-02-26 17:36:37.623186029 +0000 UTC 
m=+1312.184940368" observedRunningTime="2026-02-26 17:36:41.218385429 +0000 UTC m=+1315.780139768" watchObservedRunningTime="2026-02-26 17:36:41.265771167 +0000 UTC m=+1315.827525506" Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.921117 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 26 17:36:41 crc kubenswrapper[4805]: I0226 17:36:41.921438 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.211141 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" event={"ID":"5e792f59-e6d1-48d3-bc1b-e17d2e0da457","Type":"ContainerStarted","Data":"d3d14a004251042435340beb62288883d03f505c10acb14909fe530156485e06"} Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.211551 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.214548 4805 generic.go:334] "Generic (PLEG): container finished" podID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerID="93751eef9b15ca4526d5b992a2428019897b5550d5552e88b2d14e2c40b281a8" exitCode=0 Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.214602 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" event={"ID":"41552c16-eba4-4163-a652-8490f5dd0ef1","Type":"ContainerDied","Data":"93751eef9b15ca4526d5b992a2428019897b5550d5552e88b2d14e2c40b281a8"} Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.220823 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" event={"ID":"84eddd87-9e83-41ff-a0a9-f813279962cb","Type":"ContainerStarted","Data":"6bf0beda42e6e708f836a74b76ed8336808d2bc603c3ce10bfd1ae08c855afc8"} Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 
17:36:42.221108 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.223688 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf","Type":"ContainerStarted","Data":"5f8f83a81c7d67f14bd27a02024189b0a3e93d3dd707c453ba6d1bc131b90312"} Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.238075 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tzv64" event={"ID":"3645c31c-6e0b-4f42-b270-91cf46d0aaf9","Type":"ContainerStarted","Data":"e409b79ce6c2b34c895e7ebd312ff0d09d5987bf03250dda9e3204e65d69859e"} Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.243249 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" podStartSLOduration=17.635567922 podStartE2EDuration="25.243234043s" podCreationTimestamp="2026-02-26 17:36:17 +0000 UTC" firstStartedPulling="2026-02-26 17:36:30.181824734 +0000 UTC m=+1304.743579073" lastFinishedPulling="2026-02-26 17:36:37.789490855 +0000 UTC m=+1312.351245194" observedRunningTime="2026-02-26 17:36:42.229705011 +0000 UTC m=+1316.791459350" watchObservedRunningTime="2026-02-26 17:36:42.243234043 +0000 UTC m=+1316.804988382" Feb 26 17:36:42 crc kubenswrapper[4805]: I0226 17:36:42.281880 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" podStartSLOduration=4.181071046 podStartE2EDuration="43.28186176s" podCreationTimestamp="2026-02-26 17:35:59 +0000 UTC" firstStartedPulling="2026-02-26 17:36:00.317157878 +0000 UTC m=+1274.878912217" lastFinishedPulling="2026-02-26 17:36:39.417948592 +0000 UTC m=+1313.979702931" observedRunningTime="2026-02-26 17:36:42.274388541 +0000 UTC m=+1316.836142890" watchObservedRunningTime="2026-02-26 
17:36:42.28186176 +0000 UTC m=+1316.843616099" Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.251586 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-tzv64" event={"ID":"3645c31c-6e0b-4f42-b270-91cf46d0aaf9","Type":"ContainerStarted","Data":"69b5a0af84feec903ccfed70468f911b0cbd72b0f2e0d535cbbce0d44bdea1b1"} Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.252226 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tzv64" Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.253671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82935132-2a23-4b0c-86c5-be40089b7e0b","Type":"ContainerStarted","Data":"db9ad2b23715de3e228e0c29dc0b879be5e7ad0805b9838a501fa17eb0b47b4c"} Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.260616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c793c17-a107-4006-9e15-5a2ac2afa296","Type":"ContainerStarted","Data":"62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f"} Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.262670 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.263324 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.285421 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-tzv64" podStartSLOduration=27.73725177 podStartE2EDuration="35.285399746s" podCreationTimestamp="2026-02-26 17:36:08 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.952356601 +0000 UTC m=+1304.514110940" lastFinishedPulling="2026-02-26 17:36:37.500504577 +0000 UTC m=+1312.062258916" observedRunningTime="2026-02-26 
17:36:43.276965983 +0000 UTC m=+1317.838720322" watchObservedRunningTime="2026-02-26 17:36:43.285399746 +0000 UTC m=+1317.847154085" Feb 26 17:36:43 crc kubenswrapper[4805]: I0226 17:36:43.581897 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.268206 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb","Type":"ContainerStarted","Data":"bbf865434844883a509ed2291bd82f1ffb00da7a9de9748c9f7ba840e17b920a"} Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.271409 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" event={"ID":"41552c16-eba4-4163-a652-8490f5dd0ef1","Type":"ContainerStarted","Data":"ab932f1bab8b8701623a8995c3ac8765ffb3326200867aab5fe35de894f66c9f"} Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.271671 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.275059 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9","Type":"ContainerStarted","Data":"d26413eb256f31a1b38a48b70605bd3e980fb02934df7a854026ed2a03892493"} Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.275530 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-tzv64" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.314894 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.331366161 podStartE2EDuration="31.314867218s" podCreationTimestamp="2026-02-26 17:36:13 +0000 UTC" firstStartedPulling="2026-02-26 17:36:30.018291108 +0000 UTC m=+1304.580045447" lastFinishedPulling="2026-02-26 17:36:43.001792165 +0000 
UTC m=+1317.563546504" observedRunningTime="2026-02-26 17:36:44.300064004 +0000 UTC m=+1318.861818343" watchObservedRunningTime="2026-02-26 17:36:44.314867218 +0000 UTC m=+1318.876621557" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.331980 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" podStartSLOduration=-9223371990.522827 podStartE2EDuration="46.33194831s" podCreationTimestamp="2026-02-26 17:35:58 +0000 UTC" firstStartedPulling="2026-02-26 17:36:00.05905283 +0000 UTC m=+1274.620807159" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:36:44.330333679 +0000 UTC m=+1318.892088018" watchObservedRunningTime="2026-02-26 17:36:44.33194831 +0000 UTC m=+1318.893702649" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.364621 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=25.13264699 podStartE2EDuration="36.364599846s" podCreationTimestamp="2026-02-26 17:36:08 +0000 UTC" firstStartedPulling="2026-02-26 17:36:31.777202295 +0000 UTC m=+1306.338956634" lastFinishedPulling="2026-02-26 17:36:43.009155151 +0000 UTC m=+1317.570909490" observedRunningTime="2026-02-26 17:36:44.361464086 +0000 UTC m=+1318.923218435" watchObservedRunningTime="2026-02-26 17:36:44.364599846 +0000 UTC m=+1318.926354175" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.470495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.470592 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.524560 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:44 crc kubenswrapper[4805]: I0226 17:36:44.944928 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 26 17:36:45 crc kubenswrapper[4805]: I0226 17:36:45.349539 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 26 17:36:45 crc kubenswrapper[4805]: I0226 17:36:45.651411 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ns67s"] Feb 26 17:36:45 crc kubenswrapper[4805]: I0226 17:36:45.652840 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:45 crc kubenswrapper[4805]: I0226 17:36:45.655188 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 26 17:36:45 crc kubenswrapper[4805]: I0226 17:36:45.944214 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 26 17:36:45 crc kubenswrapper[4805]: I0226 17:36:45.983398 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.589333 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.597457 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.597730 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdnh\" (UniqueName: 
\"kubernetes.io/projected/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-kube-api-access-7kdnh\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.597915 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-config\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.597956 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-combined-ca-bundle\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.606030 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-ovn-rundir\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.606103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-ovs-rundir\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.620173 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" 
Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.692770 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ns67s"] Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.716350 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.718087 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.718262 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kdnh\" (UniqueName: \"kubernetes.io/projected/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-kube-api-access-7kdnh\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.718302 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-config\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.718347 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-combined-ca-bundle\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.718449 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-ovn-rundir\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.718498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-ovs-rundir\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.724170 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-config\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.726596 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-ovn-rundir\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.727412 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-ovs-rundir\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.732901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-combined-ca-bundle\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.753340 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.808388 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kdnh\" (UniqueName: \"kubernetes.io/projected/01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd-kube-api-access-7kdnh\") pod \"ovn-controller-metrics-ns67s\" (UID: \"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd\") " pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.868236 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ns67s" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.880935 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-k8brz"] Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.881202 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="dnsmasq-dns" containerID="cri-o://ab932f1bab8b8701623a8995c3ac8765ffb3326200867aab5fe35de894f66c9f" gracePeriod=10 Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.921782 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-k2ps5"] Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.923247 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:46 crc kubenswrapper[4805]: I0226 17:36:46.929463 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.014351 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-k2ps5"] Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.014405 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.028074 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-config\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.028204 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrqm\" (UniqueName: \"kubernetes.io/projected/476d8a34-25de-416a-90dd-173be4f21a1d-kube-api-access-qzrqm\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.028267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.028429 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.070078 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-spqx5"] Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.070425 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="dnsmasq-dns" containerID="cri-o://6bf0beda42e6e708f836a74b76ed8336808d2bc603c3ce10bfd1ae08c855afc8" gracePeriod=10 Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.072309 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.132286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.132441 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.132541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-config\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " 
pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.132657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrqm\" (UniqueName: \"kubernetes.io/projected/476d8a34-25de-416a-90dd-173be4f21a1d-kube-api-access-qzrqm\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.133859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.135807 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.138141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-config\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.138211 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-sj72q"] Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.143939 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.155937 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-sj72q"] Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.170512 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.179462 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrqm\" (UniqueName: \"kubernetes.io/projected/476d8a34-25de-416a-90dd-173be4f21a1d-kube-api-access-qzrqm\") pod \"dnsmasq-dns-8cc7fc4dc-k2ps5\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.235594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.235700 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.235725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 
17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.235753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c7n9\" (UniqueName: \"kubernetes.io/projected/4a310598-e458-4267-99fd-2a14ee356946-kube-api-access-2c7n9\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:47 crc kubenswrapper[4805]: I0226 17:36:47.235786 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-config\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.327089 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.337640 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-config\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.337722 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.337841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-dns-svc\") pod 
\"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.337863 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.337892 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c7n9\" (UniqueName: \"kubernetes.io/projected/4a310598-e458-4267-99fd-2a14ee356946-kube-api-access-2c7n9\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.339076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-config\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.339608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.342880 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.351357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.360169 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.362919 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c7n9\" (UniqueName: \"kubernetes.io/projected/4a310598-e458-4267-99fd-2a14ee356946-kube-api-access-2c7n9\") pod \"dnsmasq-dns-b8fbc5445-sj72q\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.382301 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.383977 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.386580 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.386690 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.386742 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.386694 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ljq45" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440068 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8563b335-2f2c-40da-811d-2ceaf2299da8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440115 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccrd8\" (UniqueName: \"kubernetes.io/projected/8563b335-2f2c-40da-811d-2ceaf2299da8-kube-api-access-ccrd8\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440157 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " 
pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440327 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8563b335-2f2c-40da-811d-2ceaf2299da8-config\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440398 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8563b335-2f2c-40da-811d-2ceaf2299da8-scripts\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.440753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.492087 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.542108 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8563b335-2f2c-40da-811d-2ceaf2299da8-scripts\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.542583 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.542668 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8563b335-2f2c-40da-811d-2ceaf2299da8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.542720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccrd8\" (UniqueName: \"kubernetes.io/projected/8563b335-2f2c-40da-811d-2ceaf2299da8-kube-api-access-ccrd8\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.542854 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.542948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8563b335-2f2c-40da-811d-2ceaf2299da8-config\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.543032 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.544997 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8563b335-2f2c-40da-811d-2ceaf2299da8-scripts\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.545463 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8563b335-2f2c-40da-811d-2ceaf2299da8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.545765 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8563b335-2f2c-40da-811d-2ceaf2299da8-config\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.548727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 
17:36:47.549007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.553743 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/8563b335-2f2c-40da-811d-2ceaf2299da8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.568256 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccrd8\" (UniqueName: \"kubernetes.io/projected/8563b335-2f2c-40da-811d-2ceaf2299da8-kube-api-access-ccrd8\") pod \"ovn-northd-0\" (UID: \"8563b335-2f2c-40da-811d-2ceaf2299da8\") " pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.705299 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.920161 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.931209 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.935582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pcpnw" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.935598 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.936131 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.936565 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.947193 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.973451 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wlxbg"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.974696 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.978438 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.978683 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.979240 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:47.992260 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wlxbg"]
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.052117 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.052313 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-lock\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.052493 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-cache\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.052622 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.052832 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjp4\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-kube-api-access-9fjp4\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.052921 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154756 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fjp4\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-kube-api-access-9fjp4\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154811 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-ring-data-devices\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154885 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-scripts\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-etc-swift\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154931 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-swiftconf\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.154996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-combined-ca-bundle\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-dispersionconf\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155074 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgv6x\" (UniqueName: \"kubernetes.io/projected/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-kube-api-access-hgv6x\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155090 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-lock\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155119 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-cache\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:48.155222 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:48.155244 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:48.155297 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:36:48.655285949 +0000 UTC m=+1323.217040288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155486 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155523 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-cache\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.155572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-lock\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.157343 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.157362 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f01895f899237ff7cd427b8ab6d6688b348da0c3b3d3df00b25baed9ece7a076/globalmount\"" pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.186731 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-652d4274-048a-4d6e-be31-05a8823aa8d5\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.256269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-ring-data-devices\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257097 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-scripts\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257131 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-etc-swift\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257211 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-swiftconf\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-combined-ca-bundle\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-dispersionconf\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgv6x\" (UniqueName: \"kubernetes.io/projected/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-kube-api-access-hgv6x\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.257748 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-ring-data-devices\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.258083 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-scripts\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.258644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-etc-swift\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.260440 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjp4\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-kube-api-access-9fjp4\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.262528 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-dispersionconf\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.269513 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-swiftconf\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.278708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-combined-ca-bundle\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.284385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgv6x\" (UniqueName: \"kubernetes.io/projected/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-kube-api-access-hgv6x\") pod \"swift-ring-rebalance-wlxbg\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.291465 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wlxbg"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.478282 4805 generic.go:334] "Generic (PLEG): container finished" podID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerID="ab932f1bab8b8701623a8995c3ac8765ffb3326200867aab5fe35de894f66c9f" exitCode=0
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.478348 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" event={"ID":"41552c16-eba4-4163-a652-8490f5dd0ef1","Type":"ContainerDied","Data":"ab932f1bab8b8701623a8995c3ac8765ffb3326200867aab5fe35de894f66c9f"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.482965 4805 generic.go:334] "Generic (PLEG): container finished" podID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerID="6bf0beda42e6e708f836a74b76ed8336808d2bc603c3ce10bfd1ae08c855afc8" exitCode=0
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.483938 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" event={"ID":"84eddd87-9e83-41ff-a0a9-f813279962cb","Type":"ContainerDied","Data":"6bf0beda42e6e708f836a74b76ed8336808d2bc603c3ce10bfd1ae08c855afc8"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:48.665367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:48.665903 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:48.665922 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:48.665969 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:36:49.665953092 +0000 UTC m=+1324.227707421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:49.129065 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:49.494553 4805 generic.go:334] "Generic (PLEG): container finished" podID="657f7632-1861-4399-9731-81e9977c7640" containerID="2817559e889b1a61eb26a2ea9dc87e75e4d3db17bcbb2aa350cec5dd4478ae3f" exitCode=0
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:49.494599 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerDied","Data":"2817559e889b1a61eb26a2ea9dc87e75e4d3db17bcbb2aa350cec5dd4478ae3f"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:49.654266 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:49.689261 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:49.689649 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:49.689670 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:49.689725 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:36:51.689706538 +0000 UTC m=+1326.251460877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:50.522383 4805 generic.go:334] "Generic (PLEG): container finished" podID="26cecc08-6d2b-4e0f-a231-8ac8764e8ddf" containerID="5f8f83a81c7d67f14bd27a02024189b0a3e93d3dd707c453ba6d1bc131b90312" exitCode=0
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:50.522403 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf","Type":"ContainerDied","Data":"5f8f83a81c7d67f14bd27a02024189b0a3e93d3dd707c453ba6d1bc131b90312"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:51.067675 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:51.159855 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0be4e187-2328-4b07-825d-2435d153499d" containerName="galera" probeResult="failure" output=<
Feb 26 17:36:53 crc kubenswrapper[4805]: wsrep_local_state_comment (Joined) differs from Synced
Feb 26 17:36:53 crc kubenswrapper[4805]: >
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:51.691838 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:51.692115 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:51.692151 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:51.692224 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:36:55.692202145 +0000 UTC m=+1330.253956474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:51.994637 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lbcw7"]
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:51.997479 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.000668 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.004673 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lbcw7"]
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.025333 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.098857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg8xc\" (UniqueName: \"kubernetes.io/projected/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-kube-api-access-kg8xc\") pod \"root-account-create-update-lbcw7\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.098977 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-operator-scripts\") pod \"root-account-create-update-lbcw7\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.201361 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg8xc\" (UniqueName: \"kubernetes.io/projected/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-kube-api-access-kg8xc\") pod \"root-account-create-update-lbcw7\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.201447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-operator-scripts\") pod \"root-account-create-update-lbcw7\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.203154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-operator-scripts\") pod \"root-account-create-update-lbcw7\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.223008 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg8xc\" (UniqueName: \"kubernetes.io/projected/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-kube-api-access-kg8xc\") pod \"root-account-create-update-lbcw7\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:52.315211 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lbcw7"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.266576 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ns67s"]
Feb 26 17:36:53 crc kubenswrapper[4805]: W0226 17:36:53.274534 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01cf09f3_76a8_4c3b_ae5e_320e3fcc38cd.slice/crio-859dac9283e92485084377501856b5cd1f28016173b598b138cf10f7929ddf9a WatchSource:0}: Error finding container 859dac9283e92485084377501856b5cd1f28016173b598b138cf10f7929ddf9a: Status 404 returned error can't find the container with id 859dac9283e92485084377501856b5cd1f28016173b598b138cf10f7929ddf9a
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.482381 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-k8brz"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.516961 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.552321 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-k8brz"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.552970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-k8brz" event={"ID":"41552c16-eba4-4163-a652-8490f5dd0ef1","Type":"ContainerDied","Data":"d34b345a2b7eb4a6b12182f0c2c5e5a2eefcd54deace4b610905300f99c0accf"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.553009 4805 scope.go:117] "RemoveContainer" containerID="ab932f1bab8b8701623a8995c3ac8765ffb3326200867aab5fe35de894f66c9f"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.556703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ns67s" event={"ID":"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd","Type":"ContainerStarted","Data":"859dac9283e92485084377501856b5cd1f28016173b598b138cf10f7929ddf9a"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.560490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5" event={"ID":"84eddd87-9e83-41ff-a0a9-f813279962cb","Type":"ContainerDied","Data":"b56a798e140a2bfd4a56340a983043352f0d87331dbf827d8752cbd659ff6778"}
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.560568 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-spqx5"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.578712 4805 scope.go:117] "RemoveContainer" containerID="93751eef9b15ca4526d5b992a2428019897b5550d5552e88b2d14e2c40b281a8"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.635353 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-config\") pod \"84eddd87-9e83-41ff-a0a9-f813279962cb\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") "
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.635433 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77c4\" (UniqueName: \"kubernetes.io/projected/84eddd87-9e83-41ff-a0a9-f813279962cb-kube-api-access-g77c4\") pod \"84eddd87-9e83-41ff-a0a9-f813279962cb\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") "
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.635466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-dns-svc\") pod \"41552c16-eba4-4163-a652-8490f5dd0ef1\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") "
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.635573 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-dns-svc\") pod \"84eddd87-9e83-41ff-a0a9-f813279962cb\" (UID: \"84eddd87-9e83-41ff-a0a9-f813279962cb\") "
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.635598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-config\") pod \"41552c16-eba4-4163-a652-8490f5dd0ef1\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") "
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.635710 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z68jj\" (UniqueName: \"kubernetes.io/projected/41552c16-eba4-4163-a652-8490f5dd0ef1-kube-api-access-z68jj\") pod \"41552c16-eba4-4163-a652-8490f5dd0ef1\" (UID: \"41552c16-eba4-4163-a652-8490f5dd0ef1\") "
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.636277 4805 scope.go:117] "RemoveContainer" containerID="6bf0beda42e6e708f836a74b76ed8336808d2bc603c3ce10bfd1ae08c855afc8"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.643339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41552c16-eba4-4163-a652-8490f5dd0ef1-kube-api-access-z68jj" (OuterVolumeSpecName: "kube-api-access-z68jj") pod "41552c16-eba4-4163-a652-8490f5dd0ef1" (UID: "41552c16-eba4-4163-a652-8490f5dd0ef1"). InnerVolumeSpecName "kube-api-access-z68jj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.643494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84eddd87-9e83-41ff-a0a9-f813279962cb-kube-api-access-g77c4" (OuterVolumeSpecName: "kube-api-access-g77c4") pod "84eddd87-9e83-41ff-a0a9-f813279962cb" (UID: "84eddd87-9e83-41ff-a0a9-f813279962cb"). InnerVolumeSpecName "kube-api-access-g77c4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.668250 4805 scope.go:117] "RemoveContainer" containerID="dd32bc3cd3a5c36ea99515bced60f84d0a86c79b56c5453789ae68624edd1fed"
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.707851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84eddd87-9e83-41ff-a0a9-f813279962cb" (UID: "84eddd87-9e83-41ff-a0a9-f813279962cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.712502 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41552c16-eba4-4163-a652-8490f5dd0ef1" (UID: "41552c16-eba4-4163-a652-8490f5dd0ef1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.712755 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-config" (OuterVolumeSpecName: "config") pod "84eddd87-9e83-41ff-a0a9-f813279962cb" (UID: "84eddd87-9e83-41ff-a0a9-f813279962cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.737028 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-config" (OuterVolumeSpecName: "config") pod "41552c16-eba4-4163-a652-8490f5dd0ef1" (UID: "41552c16-eba4-4163-a652-8490f5dd0ef1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.755655 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z68jj\" (UniqueName: \"kubernetes.io/projected/41552c16-eba4-4163-a652-8490f5dd0ef1-kube-api-access-z68jj\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.755687 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.755697 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77c4\" (UniqueName: \"kubernetes.io/projected/84eddd87-9e83-41ff-a0a9-f813279962cb-kube-api-access-g77c4\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.755708 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.755717 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84eddd87-9e83-41ff-a0a9-f813279962cb-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.755729 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41552c16-eba4-4163-a652-8490f5dd0ef1-config\") on node \"crc\" DevicePath \"\""
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.764969 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-sj72q"]
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.774496 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lbcw7"]
Feb 26 17:36:53 crc kubenswrapper[4805]: W0226 17:36:53.778596 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1286c88f_0dc6_4cf4_a610_ad7dceb34b34.slice/crio-e508db7214529ffa9194fc03a9453bd604ed95f839140821b24b72dd4c476faf WatchSource:0}: Error finding container e508db7214529ffa9194fc03a9453bd604ed95f839140821b24b72dd4c476faf: Status 404 returned error can't find the container with id e508db7214529ffa9194fc03a9453bd604ed95f839140821b24b72dd4c476faf
Feb 26 17:36:53 crc kubenswrapper[4805]: W0226 17:36:53.781462 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8563b335_2f2c_40da_811d_2ceaf2299da8.slice/crio-f8570bd82b952bf3d3616942dd2f1f36aa0b87abef87a72e77d7457a29f219bc WatchSource:0}: Error finding container f8570bd82b952bf3d3616942dd2f1f36aa0b87abef87a72e77d7457a29f219bc: Status 404 returned error can't find the container with id f8570bd82b952bf3d3616942dd2f1f36aa0b87abef87a72e77d7457a29f219bc
Feb 26 17:36:53 crc kubenswrapper[4805]: W0226 17:36:53.785464 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod476d8a34_25de_416a_90dd_173be4f21a1d.slice/crio-5b80812a04649a2670d8377b94d772ad7600050bca4ce504770c342ac23b9a17 WatchSource:0}: Error finding container 5b80812a04649a2670d8377b94d772ad7600050bca4ce504770c342ac23b9a17: Status 404 returned error can't find the container with id 5b80812a04649a2670d8377b94d772ad7600050bca4ce504770c342ac23b9a17
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.794201 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.814441 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-k2ps5"]
Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.826551 4805 kubelet.go:2428]
"SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wlxbg"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.838855 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-m5z27"] Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:53.839613 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="dnsmasq-dns" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.839643 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="dnsmasq-dns" Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:53.839664 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="dnsmasq-dns" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.839671 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="dnsmasq-dns" Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:53.839687 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="init" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.839694 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="init" Feb 26 17:36:53 crc kubenswrapper[4805]: E0226 17:36:53.839707 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="init" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.839713 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="init" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.839893 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" containerName="dnsmasq-dns" Feb 26 17:36:53 crc kubenswrapper[4805]: 
I0226 17:36:53.839917 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" containerName="dnsmasq-dns" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.842108 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m5z27" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.852076 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m5z27"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.858861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aaec7e4-5667-4efd-9506-ab97bd392d78-operator-scripts\") pod \"glance-db-create-m5z27\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " pod="openstack/glance-db-create-m5z27" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.859282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sbbx\" (UniqueName: \"kubernetes.io/projected/3aaec7e4-5667-4efd-9506-ab97bd392d78-kube-api-access-7sbbx\") pod \"glance-db-create-m5z27\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " pod="openstack/glance-db-create-m5z27" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.908679 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-spqx5"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.924090 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-spqx5"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.934124 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2b40-account-create-update-w85pf"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.936531 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.943188 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.944916 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2b40-account-create-update-w85pf"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.957508 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-k8brz"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.961761 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sbbx\" (UniqueName: \"kubernetes.io/projected/3aaec7e4-5667-4efd-9506-ab97bd392d78-kube-api-access-7sbbx\") pod \"glance-db-create-m5z27\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " pod="openstack/glance-db-create-m5z27" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.961949 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1416491f-df93-494f-81a0-21e27134ce2f-operator-scripts\") pod \"glance-2b40-account-create-update-w85pf\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.962158 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aaec7e4-5667-4efd-9506-ab97bd392d78-operator-scripts\") pod \"glance-db-create-m5z27\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " pod="openstack/glance-db-create-m5z27" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.962245 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9dd\" (UniqueName: 
\"kubernetes.io/projected/1416491f-df93-494f-81a0-21e27134ce2f-kube-api-access-hj9dd\") pod \"glance-2b40-account-create-update-w85pf\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.964656 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aaec7e4-5667-4efd-9506-ab97bd392d78-operator-scripts\") pod \"glance-db-create-m5z27\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " pod="openstack/glance-db-create-m5z27" Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.976663 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-k8brz"] Feb 26 17:36:53 crc kubenswrapper[4805]: I0226 17:36:53.995564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sbbx\" (UniqueName: \"kubernetes.io/projected/3aaec7e4-5667-4efd-9506-ab97bd392d78-kube-api-access-7sbbx\") pod \"glance-db-create-m5z27\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " pod="openstack/glance-db-create-m5z27" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.064502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1416491f-df93-494f-81a0-21e27134ce2f-operator-scripts\") pod \"glance-2b40-account-create-update-w85pf\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.064743 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj9dd\" (UniqueName: \"kubernetes.io/projected/1416491f-df93-494f-81a0-21e27134ce2f-kube-api-access-hj9dd\") pod \"glance-2b40-account-create-update-w85pf\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " 
pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.067144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1416491f-df93-494f-81a0-21e27134ce2f-operator-scripts\") pod \"glance-2b40-account-create-update-w85pf\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.087403 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj9dd\" (UniqueName: \"kubernetes.io/projected/1416491f-df93-494f-81a0-21e27134ce2f-kube-api-access-hj9dd\") pod \"glance-2b40-account-create-update-w85pf\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:54 crc kubenswrapper[4805]: E0226 17:36:54.148989 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41552c16_eba4_4163_a652_8490f5dd0ef1.slice/crio-d34b345a2b7eb4a6b12182f0c2c5e5a2eefcd54deace4b610905300f99c0accf\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84eddd87_9e83_41ff_a0a9_f813279962cb.slice/crio-b56a798e140a2bfd4a56340a983043352f0d87331dbf827d8752cbd659ff6778\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41552c16_eba4_4163_a652_8490f5dd0ef1.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.169677 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m5z27" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.301619 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.577661 4805 generic.go:334] "Generic (PLEG): container finished" podID="4a310598-e458-4267-99fd-2a14ee356946" containerID="864b253ac289bfcf14b1df01e94f89382aecb3608168be88f389f72a35370aae" exitCode=0 Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.577751 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" event={"ID":"4a310598-e458-4267-99fd-2a14ee356946","Type":"ContainerDied","Data":"864b253ac289bfcf14b1df01e94f89382aecb3608168be88f389f72a35370aae"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.578077 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" event={"ID":"4a310598-e458-4267-99fd-2a14ee356946","Type":"ContainerStarted","Data":"bd90b42704ace84914d7e4c8a2bf4f250a90e3abd9c91495ac7e581742ea2ebe"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.585069 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" event={"ID":"476d8a34-25de-416a-90dd-173be4f21a1d","Type":"ContainerDied","Data":"42d34bcaa07707a4af69d894e6cb275f2108008a80f9ccf6e7a483b992df14d3"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.585604 4805 generic.go:334] "Generic (PLEG): container finished" podID="476d8a34-25de-416a-90dd-173be4f21a1d" containerID="42d34bcaa07707a4af69d894e6cb275f2108008a80f9ccf6e7a483b992df14d3" exitCode=0 Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.585703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" event={"ID":"476d8a34-25de-416a-90dd-173be4f21a1d","Type":"ContainerStarted","Data":"5b80812a04649a2670d8377b94d772ad7600050bca4ce504770c342ac23b9a17"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.593162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-wlxbg" event={"ID":"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975","Type":"ContainerStarted","Data":"ff447b3402c0e42a7ed48076341e6a60383479f074ef5de92881621ae6d43155"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.606294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ns67s" event={"ID":"01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd","Type":"ContainerStarted","Data":"d973d290c64dffd5aabf390c07339a63172a0d550556b14a44bc1373918b535e"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.609585 4805 generic.go:334] "Generic (PLEG): container finished" podID="1286c88f-0dc6-4cf4-a610-ad7dceb34b34" containerID="347128673f3264a96be15b8a58cdae6efd8f975ee77be9595613db09f28ec470" exitCode=0 Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.609724 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lbcw7" event={"ID":"1286c88f-0dc6-4cf4-a610-ad7dceb34b34","Type":"ContainerDied","Data":"347128673f3264a96be15b8a58cdae6efd8f975ee77be9595613db09f28ec470"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.609758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lbcw7" event={"ID":"1286c88f-0dc6-4cf4-a610-ad7dceb34b34","Type":"ContainerStarted","Data":"e508db7214529ffa9194fc03a9453bd604ed95f839140821b24b72dd4c476faf"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.613171 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8563b335-2f2c-40da-811d-2ceaf2299da8","Type":"ContainerStarted","Data":"f8570bd82b952bf3d3616942dd2f1f36aa0b87abef87a72e77d7457a29f219bc"} Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.795911 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ns67s" podStartSLOduration=9.795887836 podStartE2EDuration="9.795887836s" podCreationTimestamp="2026-02-26 17:36:45 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:36:54.680487848 +0000 UTC m=+1329.242242187" watchObservedRunningTime="2026-02-26 17:36:54.795887836 +0000 UTC m=+1329.357642195" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.810588 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mnplr"] Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.813958 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.858756 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mnplr"] Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.882356 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-m5z27"] Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.900783 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a31864-4394-433d-9f97-c52b9d9984e5-operator-scripts\") pod \"keystone-db-create-mnplr\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.901103 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c45q\" (UniqueName: \"kubernetes.io/projected/33a31864-4394-433d-9f97-c52b9d9984e5-kube-api-access-8c45q\") pod \"keystone-db-create-mnplr\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.904867 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a0e5-account-create-update-w7vr5"] Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.907146 4805 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.909569 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 26 17:36:54 crc kubenswrapper[4805]: I0226 17:36:54.932429 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a0e5-account-create-update-w7vr5"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.023902 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a31864-4394-433d-9f97-c52b9d9984e5-operator-scripts\") pod \"keystone-db-create-mnplr\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.024045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c45q\" (UniqueName: \"kubernetes.io/projected/33a31864-4394-433d-9f97-c52b9d9984e5-kube-api-access-8c45q\") pod \"keystone-db-create-mnplr\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.024079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4393bdca-06c3-4243-abef-be4d46c5f0b3-operator-scripts\") pod \"keystone-a0e5-account-create-update-w7vr5\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.024197 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjlh\" (UniqueName: \"kubernetes.io/projected/4393bdca-06c3-4243-abef-be4d46c5f0b3-kube-api-access-cgjlh\") pod 
\"keystone-a0e5-account-create-update-w7vr5\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.025799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a31864-4394-433d-9f97-c52b9d9984e5-operator-scripts\") pod \"keystone-db-create-mnplr\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.093920 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41552c16-eba4-4163-a652-8490f5dd0ef1" path="/var/lib/kubelet/pods/41552c16-eba4-4163-a652-8490f5dd0ef1/volumes" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.094822 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84eddd87-9e83-41ff-a0a9-f813279962cb" path="/var/lib/kubelet/pods/84eddd87-9e83-41ff-a0a9-f813279962cb/volumes" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.095520 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2b40-account-create-update-w85pf"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.095554 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-52f5k"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.096886 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-52f5k"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.097072 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.101521 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c45q\" (UniqueName: \"kubernetes.io/projected/33a31864-4394-433d-9f97-c52b9d9984e5-kube-api-access-8c45q\") pod \"keystone-db-create-mnplr\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.125625 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4393bdca-06c3-4243-abef-be4d46c5f0b3-operator-scripts\") pod \"keystone-a0e5-account-create-update-w7vr5\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.125739 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjlh\" (UniqueName: \"kubernetes.io/projected/4393bdca-06c3-4243-abef-be4d46c5f0b3-kube-api-access-cgjlh\") pod \"keystone-a0e5-account-create-update-w7vr5\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.133514 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4393bdca-06c3-4243-abef-be4d46c5f0b3-operator-scripts\") pod \"keystone-a0e5-account-create-update-w7vr5\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.137131 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8349-account-create-update-7td5j"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.138429 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.142380 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.174505 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjlh\" (UniqueName: \"kubernetes.io/projected/4393bdca-06c3-4243-abef-be4d46c5f0b3-kube-api-access-cgjlh\") pod \"keystone-a0e5-account-create-update-w7vr5\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.187325 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mnplr" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.210709 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8349-account-create-update-7td5j"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.228933 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cdv\" (UniqueName: \"kubernetes.io/projected/83faf020-4a1d-4f62-846b-0d94b1eeabd1-kube-api-access-78cdv\") pod \"placement-db-create-52f5k\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.229078 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83faf020-4a1d-4f62-846b-0d94b1eeabd1-operator-scripts\") pod \"placement-db-create-52f5k\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.278478 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.333524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78cdv\" (UniqueName: \"kubernetes.io/projected/83faf020-4a1d-4f62-846b-0d94b1eeabd1-kube-api-access-78cdv\") pod \"placement-db-create-52f5k\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.333626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx582\" (UniqueName: \"kubernetes.io/projected/5024aeeb-12a7-40df-9ded-fa42366d647e-kube-api-access-tx582\") pod \"placement-8349-account-create-update-7td5j\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.333661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83faf020-4a1d-4f62-846b-0d94b1eeabd1-operator-scripts\") pod \"placement-db-create-52f5k\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.333722 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024aeeb-12a7-40df-9ded-fa42366d647e-operator-scripts\") pod \"placement-8349-account-create-update-7td5j\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.334892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/83faf020-4a1d-4f62-846b-0d94b1eeabd1-operator-scripts\") pod \"placement-db-create-52f5k\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.360357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78cdv\" (UniqueName: \"kubernetes.io/projected/83faf020-4a1d-4f62-846b-0d94b1eeabd1-kube-api-access-78cdv\") pod \"placement-db-create-52f5k\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.435494 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024aeeb-12a7-40df-9ded-fa42366d647e-operator-scripts\") pod \"placement-8349-account-create-update-7td5j\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.435659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx582\" (UniqueName: \"kubernetes.io/projected/5024aeeb-12a7-40df-9ded-fa42366d647e-kube-api-access-tx582\") pod \"placement-8349-account-create-update-7td5j\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.436718 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024aeeb-12a7-40df-9ded-fa42366d647e-operator-scripts\") pod \"placement-8349-account-create-update-7td5j\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.444811 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-52f5k" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.457436 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx582\" (UniqueName: \"kubernetes.io/projected/5024aeeb-12a7-40df-9ded-fa42366d647e-kube-api-access-tx582\") pod \"placement-8349-account-create-update-7td5j\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.483388 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.629090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m5z27" event={"ID":"3aaec7e4-5667-4efd-9506-ab97bd392d78","Type":"ContainerStarted","Data":"a4f52cce3dd3079a2cb59f93565d630d462a129694dca31bada8abe97dfef090"} Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.629148 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m5z27" event={"ID":"3aaec7e4-5667-4efd-9506-ab97bd392d78","Type":"ContainerStarted","Data":"41b433b0d63f1bc3d2d4beaf1bcc36b9672131045e1600e05985c1cffc9f3aa7"} Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.633299 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" event={"ID":"4a310598-e458-4267-99fd-2a14ee356946","Type":"ContainerStarted","Data":"d240c37e4c6840bafea545fc49876c1ff4731b8d719f5aeb77752f049fd123a6"} Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.634233 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.638097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" 
event={"ID":"476d8a34-25de-416a-90dd-173be4f21a1d","Type":"ContainerStarted","Data":"1fde00d3693042157572ca433db8e07019428c8f7f8c2646f927f249f8f4a4d4"} Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.638795 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.652610 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-m5z27" podStartSLOduration=2.652591949 podStartE2EDuration="2.652591949s" podCreationTimestamp="2026-02-26 17:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:36:55.650458985 +0000 UTC m=+1330.212213334" watchObservedRunningTime="2026-02-26 17:36:55.652591949 +0000 UTC m=+1330.214346288" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.658004 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2b40-account-create-update-w85pf" event={"ID":"1416491f-df93-494f-81a0-21e27134ce2f","Type":"ContainerStarted","Data":"9a943d1648249b0562964d6f11679b1eaf47ade3109ce348b4a8f47ee4e0235d"} Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.685896 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" podStartSLOduration=9.68586444 podStartE2EDuration="9.68586444s" podCreationTimestamp="2026-02-26 17:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:36:55.678793761 +0000 UTC m=+1330.240548120" watchObservedRunningTime="2026-02-26 17:36:55.68586444 +0000 UTC m=+1330.247618779" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.714687 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" podStartSLOduration=8.714664548 
podStartE2EDuration="8.714664548s" podCreationTimestamp="2026-02-26 17:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:36:55.705121737 +0000 UTC m=+1330.266876076" watchObservedRunningTime="2026-02-26 17:36:55.714664548 +0000 UTC m=+1330.276418887" Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.743624 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0" Feb 26 17:36:55 crc kubenswrapper[4805]: E0226 17:36:55.745923 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 17:36:55 crc kubenswrapper[4805]: E0226 17:36:55.745957 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 17:36:55 crc kubenswrapper[4805]: E0226 17:36:55.746011 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:37:03.745990871 +0000 UTC m=+1338.307745280 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.858838 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mnplr"] Feb 26 17:36:55 crc kubenswrapper[4805]: I0226 17:36:55.983788 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a0e5-account-create-update-w7vr5"] Feb 26 17:36:56 crc kubenswrapper[4805]: I0226 17:36:56.137899 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-52f5k"] Feb 26 17:36:56 crc kubenswrapper[4805]: I0226 17:36:56.677697 4805 generic.go:334] "Generic (PLEG): container finished" podID="1416491f-df93-494f-81a0-21e27134ce2f" containerID="4e9eae4198f9f533ae9235bb059fbbf9fea594ad5c96c387c19e641fbe0ec573" exitCode=0 Feb 26 17:36:56 crc kubenswrapper[4805]: I0226 17:36:56.677778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2b40-account-create-update-w85pf" event={"ID":"1416491f-df93-494f-81a0-21e27134ce2f","Type":"ContainerDied","Data":"4e9eae4198f9f533ae9235bb059fbbf9fea594ad5c96c387c19e641fbe0ec573"} Feb 26 17:36:56 crc kubenswrapper[4805]: I0226 17:36:56.680536 4805 generic.go:334] "Generic (PLEG): container finished" podID="3aaec7e4-5667-4efd-9506-ab97bd392d78" containerID="a4f52cce3dd3079a2cb59f93565d630d462a129694dca31bada8abe97dfef090" exitCode=0 Feb 26 17:36:56 crc kubenswrapper[4805]: I0226 17:36:56.681010 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m5z27" event={"ID":"3aaec7e4-5667-4efd-9506-ab97bd392d78","Type":"ContainerDied","Data":"a4f52cce3dd3079a2cb59f93565d630d462a129694dca31bada8abe97dfef090"} Feb 26 17:36:56 crc kubenswrapper[4805]: W0226 17:36:56.956397 4805 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a31864_4394_433d_9f97_c52b9d9984e5.slice/crio-dea26c6579ce3f9c2ba0d9689343f62476de5b07c52a480d5f4b3b2f4a998d97 WatchSource:0}: Error finding container dea26c6579ce3f9c2ba0d9689343f62476de5b07c52a480d5f4b3b2f4a998d97: Status 404 returned error can't find the container with id dea26c6579ce3f9c2ba0d9689343f62476de5b07c52a480d5f4b3b2f4a998d97 Feb 26 17:36:56 crc kubenswrapper[4805]: W0226 17:36:56.958201 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4393bdca_06c3_4243_abef_be4d46c5f0b3.slice/crio-ecc23eb4b9d5d35e65e0721a649d6a94c77349f2e5467d772b24c372818236db WatchSource:0}: Error finding container ecc23eb4b9d5d35e65e0721a649d6a94c77349f2e5467d772b24c372818236db: Status 404 returned error can't find the container with id ecc23eb4b9d5d35e65e0721a649d6a94c77349f2e5467d772b24c372818236db Feb 26 17:36:57 crc kubenswrapper[4805]: W0226 17:36:57.005596 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83faf020_4a1d_4f62_846b_0d94b1eeabd1.slice/crio-5995737f83ffce475a389331ece65bdabab6aa00c742ef13395817ef971bce99 WatchSource:0}: Error finding container 5995737f83ffce475a389331ece65bdabab6aa00c742ef13395817ef971bce99: Status 404 returned error can't find the container with id 5995737f83ffce475a389331ece65bdabab6aa00c742ef13395817ef971bce99 Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.098351 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lbcw7" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.131848 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg8xc\" (UniqueName: \"kubernetes.io/projected/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-kube-api-access-kg8xc\") pod \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.132133 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-operator-scripts\") pod \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\" (UID: \"1286c88f-0dc6-4cf4-a610-ad7dceb34b34\") " Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.132577 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1286c88f-0dc6-4cf4-a610-ad7dceb34b34" (UID: "1286c88f-0dc6-4cf4-a610-ad7dceb34b34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.137403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-kube-api-access-kg8xc" (OuterVolumeSpecName: "kube-api-access-kg8xc") pod "1286c88f-0dc6-4cf4-a610-ad7dceb34b34" (UID: "1286c88f-0dc6-4cf4-a610-ad7dceb34b34"). InnerVolumeSpecName "kube-api-access-kg8xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.234281 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.234320 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg8xc\" (UniqueName: \"kubernetes.io/projected/1286c88f-0dc6-4cf4-a610-ad7dceb34b34-kube-api-access-kg8xc\") on node \"crc\" DevicePath \"\"" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.435219 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8349-account-create-update-7td5j"] Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.482550 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-tk4vq" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.698858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lbcw7" event={"ID":"1286c88f-0dc6-4cf4-a610-ad7dceb34b34","Type":"ContainerDied","Data":"e508db7214529ffa9194fc03a9453bd604ed95f839140821b24b72dd4c476faf"} Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.698916 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e508db7214529ffa9194fc03a9453bd604ed95f839140821b24b72dd4c476faf" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.698991 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lbcw7" Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.700639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-52f5k" event={"ID":"83faf020-4a1d-4f62-846b-0d94b1eeabd1","Type":"ContainerStarted","Data":"5995737f83ffce475a389331ece65bdabab6aa00c742ef13395817ef971bce99"} Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.702361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0e5-account-create-update-w7vr5" event={"ID":"4393bdca-06c3-4243-abef-be4d46c5f0b3","Type":"ContainerStarted","Data":"ecc23eb4b9d5d35e65e0721a649d6a94c77349f2e5467d772b24c372818236db"} Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.703384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mnplr" event={"ID":"33a31864-4394-433d-9f97-c52b9d9984e5","Type":"ContainerStarted","Data":"dea26c6579ce3f9c2ba0d9689343f62476de5b07c52a480d5f4b3b2f4a998d97"} Feb 26 17:36:57 crc kubenswrapper[4805]: I0226 17:36:57.809673 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wxfvc" Feb 26 17:36:58 crc kubenswrapper[4805]: I0226 17:36:58.042235 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6" Feb 26 17:36:58 crc kubenswrapper[4805]: I0226 17:36:58.582558 4805 scope.go:117] "RemoveContainer" containerID="4352610a30c9dedb36f560774603a9265c1073fd82623f3f98c5f75a51581c5a" Feb 26 17:36:58 crc kubenswrapper[4805]: I0226 17:36:58.686277 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="537eaeba-93f9-4d28-871b-049946f86c2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 17:36:58 crc kubenswrapper[4805]: I0226 17:36:58.965903 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 26 17:36:59 crc kubenswrapper[4805]: I0226 17:36:59.011683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.687668 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.708329 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m5z27" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.758998 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-m5z27" event={"ID":"3aaec7e4-5667-4efd-9506-ab97bd392d78","Type":"ContainerDied","Data":"41b433b0d63f1bc3d2d4beaf1bcc36b9672131045e1600e05985c1cffc9f3aa7"} Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.759141 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b433b0d63f1bc3d2d4beaf1bcc36b9672131045e1600e05985c1cffc9f3aa7" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.759098 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-m5z27" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.765315 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2b40-account-create-update-w85pf" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.765307 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2b40-account-create-update-w85pf" event={"ID":"1416491f-df93-494f-81a0-21e27134ce2f","Type":"ContainerDied","Data":"9a943d1648249b0562964d6f11679b1eaf47ade3109ce348b4a8f47ee4e0235d"} Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.765480 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a943d1648249b0562964d6f11679b1eaf47ade3109ce348b4a8f47ee4e0235d" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.766824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8349-account-create-update-7td5j" event={"ID":"5024aeeb-12a7-40df-9ded-fa42366d647e","Type":"ContainerStarted","Data":"7f4a6de1640c707424c8725c4c6a3ed1173317f8732a1d10209c471d2ed15ac2"} Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.767442 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aaec7e4-5667-4efd-9506-ab97bd392d78-operator-scripts\") pod \"3aaec7e4-5667-4efd-9506-ab97bd392d78\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.767475 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sbbx\" (UniqueName: \"kubernetes.io/projected/3aaec7e4-5667-4efd-9506-ab97bd392d78-kube-api-access-7sbbx\") pod \"3aaec7e4-5667-4efd-9506-ab97bd392d78\" (UID: \"3aaec7e4-5667-4efd-9506-ab97bd392d78\") " Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.767568 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj9dd\" (UniqueName: \"kubernetes.io/projected/1416491f-df93-494f-81a0-21e27134ce2f-kube-api-access-hj9dd\") pod \"1416491f-df93-494f-81a0-21e27134ce2f\" 
(UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.767672 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1416491f-df93-494f-81a0-21e27134ce2f-operator-scripts\") pod \"1416491f-df93-494f-81a0-21e27134ce2f\" (UID: \"1416491f-df93-494f-81a0-21e27134ce2f\") " Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.768548 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aaec7e4-5667-4efd-9506-ab97bd392d78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aaec7e4-5667-4efd-9506-ab97bd392d78" (UID: "3aaec7e4-5667-4efd-9506-ab97bd392d78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.768580 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1416491f-df93-494f-81a0-21e27134ce2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1416491f-df93-494f-81a0-21e27134ce2f" (UID: "1416491f-df93-494f-81a0-21e27134ce2f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.768998 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aaec7e4-5667-4efd-9506-ab97bd392d78-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.769013 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1416491f-df93-494f-81a0-21e27134ce2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.774750 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aaec7e4-5667-4efd-9506-ab97bd392d78-kube-api-access-7sbbx" (OuterVolumeSpecName: "kube-api-access-7sbbx") pod "3aaec7e4-5667-4efd-9506-ab97bd392d78" (UID: "3aaec7e4-5667-4efd-9506-ab97bd392d78"). InnerVolumeSpecName "kube-api-access-7sbbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.786544 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1416491f-df93-494f-81a0-21e27134ce2f-kube-api-access-hj9dd" (OuterVolumeSpecName: "kube-api-access-hj9dd") pod "1416491f-df93-494f-81a0-21e27134ce2f" (UID: "1416491f-df93-494f-81a0-21e27134ce2f"). InnerVolumeSpecName "kube-api-access-hj9dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.870511 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sbbx\" (UniqueName: \"kubernetes.io/projected/3aaec7e4-5667-4efd-9506-ab97bd392d78-kube-api-access-7sbbx\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.870543 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj9dd\" (UniqueName: \"kubernetes.io/projected/1416491f-df93-494f-81a0-21e27134ce2f-kube-api-access-hj9dd\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.876069 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lbcw7"] Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.887134 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lbcw7"] Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.976549 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1286c88f-0dc6-4cf4-a610-ad7dceb34b34" path="/var/lib/kubelet/pods/1286c88f-0dc6-4cf4-a610-ad7dceb34b34/volumes" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.977599 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9qmc8"] Feb 26 17:37:00 crc kubenswrapper[4805]: E0226 17:37:00.977956 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaec7e4-5667-4efd-9506-ab97bd392d78" containerName="mariadb-database-create" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.977978 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaec7e4-5667-4efd-9506-ab97bd392d78" containerName="mariadb-database-create" Feb 26 17:37:00 crc kubenswrapper[4805]: E0226 17:37:00.978001 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1416491f-df93-494f-81a0-21e27134ce2f" 
containerName="mariadb-account-create-update" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.978015 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1416491f-df93-494f-81a0-21e27134ce2f" containerName="mariadb-account-create-update" Feb 26 17:37:00 crc kubenswrapper[4805]: E0226 17:37:00.978331 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1286c88f-0dc6-4cf4-a610-ad7dceb34b34" containerName="mariadb-account-create-update" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.978343 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1286c88f-0dc6-4cf4-a610-ad7dceb34b34" containerName="mariadb-account-create-update" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.978579 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1286c88f-0dc6-4cf4-a610-ad7dceb34b34" containerName="mariadb-account-create-update" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.978599 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaec7e4-5667-4efd-9506-ab97bd392d78" containerName="mariadb-database-create" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.978621 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1416491f-df93-494f-81a0-21e27134ce2f" containerName="mariadb-account-create-update" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.979356 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9qmc8"] Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.979452 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:00 crc kubenswrapper[4805]: I0226 17:37:00.988233 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.076007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-operator-scripts\") pod \"root-account-create-update-9qmc8\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.076123 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpp2t\" (UniqueName: \"kubernetes.io/projected/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-kube-api-access-zpp2t\") pod \"root-account-create-update-9qmc8\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.177196 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-operator-scripts\") pod \"root-account-create-update-9qmc8\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.177288 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpp2t\" (UniqueName: \"kubernetes.io/projected/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-kube-api-access-zpp2t\") pod \"root-account-create-update-9qmc8\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.178888 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-operator-scripts\") pod \"root-account-create-update-9qmc8\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.214977 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpp2t\" (UniqueName: \"kubernetes.io/projected/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-kube-api-access-zpp2t\") pod \"root-account-create-update-9qmc8\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:01 crc kubenswrapper[4805]: I0226 17:37:01.302237 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:02 crc kubenswrapper[4805]: I0226 17:37:02.329447 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:37:02 crc kubenswrapper[4805]: I0226 17:37:02.494398 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:37:02 crc kubenswrapper[4805]: I0226 17:37:02.562566 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-k2ps5"] Feb 26 17:37:02 crc kubenswrapper[4805]: I0226 17:37:02.811917 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="dnsmasq-dns" containerID="cri-o://1fde00d3693042157572ca433db8e07019428c8f7f8c2646f927f249f8f4a4d4" gracePeriod=10 Feb 26 17:37:02 crc kubenswrapper[4805]: I0226 17:37:02.977864 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:37:02 crc kubenswrapper[4805]: I0226 17:37:02.977931 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:37:03 crc kubenswrapper[4805]: I0226 17:37:03.820972 4805 generic.go:334] "Generic (PLEG): container finished" podID="476d8a34-25de-416a-90dd-173be4f21a1d" containerID="1fde00d3693042157572ca433db8e07019428c8f7f8c2646f927f249f8f4a4d4" exitCode=0 Feb 26 17:37:03 crc kubenswrapper[4805]: I0226 17:37:03.821055 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" event={"ID":"476d8a34-25de-416a-90dd-173be4f21a1d","Type":"ContainerDied","Data":"1fde00d3693042157572ca433db8e07019428c8f7f8c2646f927f249f8f4a4d4"} Feb 26 17:37:03 crc kubenswrapper[4805]: I0226 17:37:03.840031 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0" Feb 26 17:37:03 crc kubenswrapper[4805]: E0226 17:37:03.840240 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 17:37:03 crc kubenswrapper[4805]: E0226 17:37:03.840255 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 17:37:03 crc kubenswrapper[4805]: E0226 17:37:03.840291 4805 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:37:19.840279126 +0000 UTC m=+1354.402033455 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.048256 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-sqks9"] Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.049616 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.051948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.053028 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ff6b8" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.058415 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sqks9"] Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.144037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-config-data\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.144353 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-db-sync-config-data\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.144485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zcs\" (UniqueName: \"kubernetes.io/projected/f21b0b57-d027-42a1-a3c9-b4030f589db8-kube-api-access-g5zcs\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.144753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-combined-ca-bundle\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.245508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-config-data\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.245554 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-db-sync-config-data\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.245585 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zcs\" (UniqueName: 
\"kubernetes.io/projected/f21b0b57-d027-42a1-a3c9-b4030f589db8-kube-api-access-g5zcs\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.245736 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-combined-ca-bundle\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.252667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-db-sync-config-data\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.252841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-config-data\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.252924 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-combined-ca-bundle\") pod \"glance-db-sync-sqks9\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.262289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zcs\" (UniqueName: \"kubernetes.io/projected/f21b0b57-d027-42a1-a3c9-b4030f589db8-kube-api-access-g5zcs\") pod \"glance-db-sync-sqks9\" (UID: 
\"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:04 crc kubenswrapper[4805]: I0226 17:37:04.369735 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sqks9" Feb 26 17:37:07 crc kubenswrapper[4805]: I0226 17:37:07.328526 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Feb 26 17:37:08 crc kubenswrapper[4805]: I0226 17:37:08.689304 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="537eaeba-93f9-4d28-871b-049946f86c2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 17:37:11 crc kubenswrapper[4805]: E0226 17:37:11.007055 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741" Feb 26 17:37:11 crc kubenswrapper[4805]: E0226 17:37:11.007541 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus 
--web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g65c7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(657f7632-1861-4399-9731-81e9977c7640): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.557110 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.628456 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-config\") pod \"476d8a34-25de-416a-90dd-173be4f21a1d\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.628554 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-dns-svc\") pod \"476d8a34-25de-416a-90dd-173be4f21a1d\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.628777 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzrqm\" (UniqueName: \"kubernetes.io/projected/476d8a34-25de-416a-90dd-173be4f21a1d-kube-api-access-qzrqm\") pod \"476d8a34-25de-416a-90dd-173be4f21a1d\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.628846 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-ovsdbserver-sb\") pod \"476d8a34-25de-416a-90dd-173be4f21a1d\" (UID: \"476d8a34-25de-416a-90dd-173be4f21a1d\") " Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.660446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476d8a34-25de-416a-90dd-173be4f21a1d-kube-api-access-qzrqm" (OuterVolumeSpecName: "kube-api-access-qzrqm") pod "476d8a34-25de-416a-90dd-173be4f21a1d" (UID: "476d8a34-25de-416a-90dd-173be4f21a1d"). InnerVolumeSpecName "kube-api-access-qzrqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.691841 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9qmc8"] Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.731959 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzrqm\" (UniqueName: \"kubernetes.io/projected/476d8a34-25de-416a-90dd-173be4f21a1d-kube-api-access-qzrqm\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.848219 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sqks9"] Feb 26 17:37:11 crc kubenswrapper[4805]: W0226 17:37:11.878870 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf21b0b57_d027_42a1_a3c9_b4030f589db8.slice/crio-f3ac6945ccb25945b32289bff6c662aa3ea1842af94a452a9d1f0fcf2a5107ab WatchSource:0}: Error finding container f3ac6945ccb25945b32289bff6c662aa3ea1842af94a452a9d1f0fcf2a5107ab: Status 404 returned error can't find the container with id f3ac6945ccb25945b32289bff6c662aa3ea1842af94a452a9d1f0fcf2a5107ab Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.913559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf","Type":"ContainerStarted","Data":"900544163fb385b9cf3fdbdd1af1e229ed5f57a56d88ac05c0958ada2798a633"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.917498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8349-account-create-update-7td5j" event={"ID":"5024aeeb-12a7-40df-9ded-fa42366d647e","Type":"ContainerStarted","Data":"b281079e92ba07b5a4acc4789b4da704a0649a11d8f9d3d6a039e799e5ae8e41"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.920334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-52f5k" event={"ID":"83faf020-4a1d-4f62-846b-0d94b1eeabd1","Type":"ContainerStarted","Data":"4694545c248ad1de1e60a1831ee1465573606cb1e0f393a7c340ba2ff54365bc"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.928143 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sqks9" event={"ID":"f21b0b57-d027-42a1-a3c9-b4030f589db8","Type":"ContainerStarted","Data":"f3ac6945ccb25945b32289bff6c662aa3ea1842af94a452a9d1f0fcf2a5107ab"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.943271 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8349-account-create-update-7td5j" podStartSLOduration=16.943222611 podStartE2EDuration="16.943222611s" podCreationTimestamp="2026-02-26 17:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:11.942768289 +0000 UTC m=+1346.504522628" watchObservedRunningTime="2026-02-26 17:37:11.943222611 +0000 UTC m=+1346.504976950" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.944181 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "476d8a34-25de-416a-90dd-173be4f21a1d" (UID: "476d8a34-25de-416a-90dd-173be4f21a1d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.945464 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.945639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8cc7fc4dc-k2ps5" event={"ID":"476d8a34-25de-416a-90dd-173be4f21a1d","Type":"ContainerDied","Data":"5b80812a04649a2670d8377b94d772ad7600050bca4ce504770c342ac23b9a17"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.945689 4805 scope.go:117] "RemoveContainer" containerID="1fde00d3693042157572ca433db8e07019428c8f7f8c2646f927f249f8f4a4d4" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.954485 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mnplr" event={"ID":"33a31864-4394-433d-9f97-c52b9d9984e5","Type":"ContainerStarted","Data":"417d3862f6ad7e6ddf3f2faeb530fe737a3fa35cb01317972a3ea0c206e65d07"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.957230 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9qmc8" event={"ID":"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5","Type":"ContainerStarted","Data":"f18923a0382972221d201d4c00a58a8916995119907ed925198fed1c601edffa"} Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.972117 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-config" (OuterVolumeSpecName: "config") pod "476d8a34-25de-416a-90dd-173be4f21a1d" (UID: "476d8a34-25de-416a-90dd-173be4f21a1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.983002 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-52f5k" podStartSLOduration=17.982973936 podStartE2EDuration="17.982973936s" podCreationTimestamp="2026-02-26 17:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:11.960428286 +0000 UTC m=+1346.522182635" watchObservedRunningTime="2026-02-26 17:37:11.982973936 +0000 UTC m=+1346.544728275" Feb 26 17:37:11 crc kubenswrapper[4805]: I0226 17:37:11.997710 4805 scope.go:117] "RemoveContainer" containerID="42d34bcaa07707a4af69d894e6cb275f2108008a80f9ccf6e7a483b992df14d3" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.003189 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-mnplr" podStartSLOduration=18.003154966 podStartE2EDuration="18.003154966s" podCreationTimestamp="2026-02-26 17:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:11.981188321 +0000 UTC m=+1346.542942660" watchObservedRunningTime="2026-02-26 17:37:12.003154966 +0000 UTC m=+1346.564909325" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.018762 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "476d8a34-25de-416a-90dd-173be4f21a1d" (UID: "476d8a34-25de-416a-90dd-173be4f21a1d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.047078 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.047577 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.047596 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/476d8a34-25de-416a-90dd-173be4f21a1d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.400232 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-k2ps5"] Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.414284 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-k2ps5"] Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.968297 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" path="/var/lib/kubelet/pods/476d8a34-25de-416a-90dd-173be4f21a1d/volumes" Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.971709 4805 generic.go:334] "Generic (PLEG): container finished" podID="33a31864-4394-433d-9f97-c52b9d9984e5" containerID="417d3862f6ad7e6ddf3f2faeb530fe737a3fa35cb01317972a3ea0c206e65d07" exitCode=0 Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.971765 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mnplr" event={"ID":"33a31864-4394-433d-9f97-c52b9d9984e5","Type":"ContainerDied","Data":"417d3862f6ad7e6ddf3f2faeb530fe737a3fa35cb01317972a3ea0c206e65d07"} Feb 26 17:37:12 crc 
kubenswrapper[4805]: I0226 17:37:12.975518 4805 generic.go:334] "Generic (PLEG): container finished" podID="5024aeeb-12a7-40df-9ded-fa42366d647e" containerID="b281079e92ba07b5a4acc4789b4da704a0649a11d8f9d3d6a039e799e5ae8e41" exitCode=0 Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.975609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8349-account-create-update-7td5j" event={"ID":"5024aeeb-12a7-40df-9ded-fa42366d647e","Type":"ContainerDied","Data":"b281079e92ba07b5a4acc4789b4da704a0649a11d8f9d3d6a039e799e5ae8e41"} Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.978667 4805 generic.go:334] "Generic (PLEG): container finished" podID="ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" containerID="ac296c00f1e4df2dd1548a3e576398ad79c94db94c72b83b73cbb3332ea6f8ed" exitCode=0 Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.978779 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9qmc8" event={"ID":"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5","Type":"ContainerDied","Data":"ac296c00f1e4df2dd1548a3e576398ad79c94db94c72b83b73cbb3332ea6f8ed"} Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.982103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wlxbg" event={"ID":"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975","Type":"ContainerStarted","Data":"45a10978992e96e8dab30e7448d74563509ad4581de962d4250f4ac2fd0a6ef2"} Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.985587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8563b335-2f2c-40da-811d-2ceaf2299da8","Type":"ContainerStarted","Data":"41b358c775d45c7def32da86ba41ac973614814e34ecebaad0d76322f35dd161"} Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.985634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"8563b335-2f2c-40da-811d-2ceaf2299da8","Type":"ContainerStarted","Data":"438ff6007e7f08c95f8c5a7a722506506ab74a2d66876bbdb4119b4372882b81"} Feb 26 17:37:12 crc kubenswrapper[4805]: I0226 17:37:12.993113 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 26 17:37:13 crc kubenswrapper[4805]: I0226 17:37:13.008761 4805 generic.go:334] "Generic (PLEG): container finished" podID="83faf020-4a1d-4f62-846b-0d94b1eeabd1" containerID="4694545c248ad1de1e60a1831ee1465573606cb1e0f393a7c340ba2ff54365bc" exitCode=0 Feb 26 17:37:13 crc kubenswrapper[4805]: I0226 17:37:13.008913 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-52f5k" event={"ID":"83faf020-4a1d-4f62-846b-0d94b1eeabd1","Type":"ContainerDied","Data":"4694545c248ad1de1e60a1831ee1465573606cb1e0f393a7c340ba2ff54365bc"} Feb 26 17:37:13 crc kubenswrapper[4805]: I0226 17:37:13.016158 4805 generic.go:334] "Generic (PLEG): container finished" podID="4393bdca-06c3-4243-abef-be4d46c5f0b3" containerID="abef1a1f78868bf97dea85233723ee6a0167853b8ad667efaba274f18715dea5" exitCode=0 Feb 26 17:37:13 crc kubenswrapper[4805]: I0226 17:37:13.016398 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0e5-account-create-update-w7vr5" event={"ID":"4393bdca-06c3-4243-abef-be4d46c5f0b3","Type":"ContainerDied","Data":"abef1a1f78868bf97dea85233723ee6a0167853b8ad667efaba274f18715dea5"} Feb 26 17:37:13 crc kubenswrapper[4805]: I0226 17:37:13.059989 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wlxbg" podStartSLOduration=8.779848347 podStartE2EDuration="26.05997129s" podCreationTimestamp="2026-02-26 17:36:47 +0000 UTC" firstStartedPulling="2026-02-26 17:36:53.831849928 +0000 UTC m=+1328.393604267" lastFinishedPulling="2026-02-26 17:37:11.111972871 +0000 UTC m=+1345.673727210" observedRunningTime="2026-02-26 17:37:13.057946808 +0000 UTC m=+1347.619701147" 
watchObservedRunningTime="2026-02-26 17:37:13.05997129 +0000 UTC m=+1347.621725629" Feb 26 17:37:13 crc kubenswrapper[4805]: I0226 17:37:13.088352 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=8.886702829 podStartE2EDuration="26.088332997s" podCreationTimestamp="2026-02-26 17:36:47 +0000 UTC" firstStartedPulling="2026-02-26 17:36:53.794127214 +0000 UTC m=+1328.355881563" lastFinishedPulling="2026-02-26 17:37:10.995757392 +0000 UTC m=+1345.557511731" observedRunningTime="2026-02-26 17:37:13.076153629 +0000 UTC m=+1347.637907968" watchObservedRunningTime="2026-02-26 17:37:13.088332997 +0000 UTC m=+1347.650087336" Feb 26 17:37:14 crc kubenswrapper[4805]: I0226 17:37:14.202661 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2k9nw" podUID="6ff856bd-109f-4978-9b06-546d2afaf577" containerName="ovn-controller" probeResult="failure" output=< Feb 26 17:37:14 crc kubenswrapper[4805]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 26 17:37:14 crc kubenswrapper[4805]: > Feb 26 17:37:14 crc kubenswrapper[4805]: I0226 17:37:14.376583 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tzv64" Feb 26 17:37:14 crc kubenswrapper[4805]: I0226 17:37:14.813440 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-tzv64" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.053762 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2k9nw-config-2v4t4"] Feb 26 17:37:15 crc kubenswrapper[4805]: E0226 17:37:15.054291 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="dnsmasq-dns" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.054310 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="dnsmasq-dns" Feb 26 17:37:15 crc kubenswrapper[4805]: E0226 17:37:15.054338 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="init" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.054344 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="init" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.054540 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="476d8a34-25de-416a-90dd-173be4f21a1d" containerName="dnsmasq-dns" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.055205 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.059196 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.078300 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerStarted","Data":"0a6fc87c6e1358119cfe76bc0ee3e0b170d487e30111a2a88de2ff8cb335f1b0"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.079113 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2k9nw-config-2v4t4"] Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.099584 4805 generic.go:334] "Generic (PLEG): container finished" podID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerID="db9ad2b23715de3e228e0c29dc0b879be5e7ad0805b9838a501fa17eb0b47b4c" exitCode=0 Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.099685 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"82935132-2a23-4b0c-86c5-be40089b7e0b","Type":"ContainerDied","Data":"db9ad2b23715de3e228e0c29dc0b879be5e7ad0805b9838a501fa17eb0b47b4c"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.106794 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8349-account-create-update-7td5j" event={"ID":"5024aeeb-12a7-40df-9ded-fa42366d647e","Type":"ContainerDied","Data":"7f4a6de1640c707424c8725c4c6a3ed1173317f8732a1d10209c471d2ed15ac2"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.106841 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f4a6de1640c707424c8725c4c6a3ed1173317f8732a1d10209c471d2ed15ac2" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.110636 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9qmc8" event={"ID":"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5","Type":"ContainerDied","Data":"f18923a0382972221d201d4c00a58a8916995119907ed925198fed1c601edffa"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.110694 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18923a0382972221d201d4c00a58a8916995119907ed925198fed1c601edffa" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.113307 4805 generic.go:334] "Generic (PLEG): container finished" podID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerID="62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f" exitCode=0 Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.113366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c793c17-a107-4006-9e15-5a2ac2afa296","Type":"ContainerDied","Data":"62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.116208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-52f5k" 
event={"ID":"83faf020-4a1d-4f62-846b-0d94b1eeabd1","Type":"ContainerDied","Data":"5995737f83ffce475a389331ece65bdabab6aa00c742ef13395817ef971bce99"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.116237 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5995737f83ffce475a389331ece65bdabab6aa00c742ef13395817ef971bce99" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.122337 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a0e5-account-create-update-w7vr5" event={"ID":"4393bdca-06c3-4243-abef-be4d46c5f0b3","Type":"ContainerDied","Data":"ecc23eb4b9d5d35e65e0721a649d6a94c77349f2e5467d772b24c372818236db"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.122392 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc23eb4b9d5d35e65e0721a649d6a94c77349f2e5467d772b24c372818236db" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.136659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mnplr" event={"ID":"33a31864-4394-433d-9f97-c52b9d9984e5","Type":"ContainerDied","Data":"dea26c6579ce3f9c2ba0d9689343f62476de5b07c52a480d5f4b3b2f4a998d97"} Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.136716 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea26c6579ce3f9c2ba0d9689343f62476de5b07c52a480d5f4b3b2f4a998d97" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.155886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-scripts\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.156063 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-additional-scripts\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.156137 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.156218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-log-ovn\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.156273 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqnlh\" (UniqueName: \"kubernetes.io/projected/59505e53-aa4c-4b3d-b53c-823454fa7471-kube-api-access-vqnlh\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.156340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run-ovn\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.214892 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.223305 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.240551 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.255834 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mnplr" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.264856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run-ovn\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.266215 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-scripts\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.267064 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run-ovn\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.267809 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-additional-scripts\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.267924 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.267994 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-log-ovn\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.268056 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqnlh\" (UniqueName: \"kubernetes.io/projected/59505e53-aa4c-4b3d-b53c-823454fa7471-kube-api-access-vqnlh\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.270450 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-log-ovn\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.270456 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.270971 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-52f5k" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.273279 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-scripts\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.273706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-additional-scripts\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.296192 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqnlh\" (UniqueName: \"kubernetes.io/projected/59505e53-aa4c-4b3d-b53c-823454fa7471-kube-api-access-vqnlh\") pod \"ovn-controller-2k9nw-config-2v4t4\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716081 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-operator-scripts\") pod \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\" (UID: 
\"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716175 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c45q\" (UniqueName: \"kubernetes.io/projected/33a31864-4394-433d-9f97-c52b9d9984e5-kube-api-access-8c45q\") pod \"33a31864-4394-433d-9f97-c52b9d9984e5\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a31864-4394-433d-9f97-c52b9d9984e5-operator-scripts\") pod \"33a31864-4394-433d-9f97-c52b9d9984e5\" (UID: \"33a31864-4394-433d-9f97-c52b9d9984e5\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716258 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx582\" (UniqueName: \"kubernetes.io/projected/5024aeeb-12a7-40df-9ded-fa42366d647e-kube-api-access-tx582\") pod \"5024aeeb-12a7-40df-9ded-fa42366d647e\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716277 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpp2t\" (UniqueName: \"kubernetes.io/projected/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-kube-api-access-zpp2t\") pod \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\" (UID: \"ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716372 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5024aeeb-12a7-40df-9ded-fa42366d647e-operator-scripts\") pod \"5024aeeb-12a7-40df-9ded-fa42366d647e\" (UID: \"5024aeeb-12a7-40df-9ded-fa42366d647e\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716418 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4393bdca-06c3-4243-abef-be4d46c5f0b3-operator-scripts\") pod \"4393bdca-06c3-4243-abef-be4d46c5f0b3\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.716471 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgjlh\" (UniqueName: \"kubernetes.io/projected/4393bdca-06c3-4243-abef-be4d46c5f0b3-kube-api-access-cgjlh\") pod \"4393bdca-06c3-4243-abef-be4d46c5f0b3\" (UID: \"4393bdca-06c3-4243-abef-be4d46c5f0b3\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.720701 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5024aeeb-12a7-40df-9ded-fa42366d647e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5024aeeb-12a7-40df-9ded-fa42366d647e" (UID: "5024aeeb-12a7-40df-9ded-fa42366d647e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.721776 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4393bdca-06c3-4243-abef-be4d46c5f0b3-kube-api-access-cgjlh" (OuterVolumeSpecName: "kube-api-access-cgjlh") pod "4393bdca-06c3-4243-abef-be4d46c5f0b3" (UID: "4393bdca-06c3-4243-abef-be4d46c5f0b3"). InnerVolumeSpecName "kube-api-access-cgjlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.722722 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4393bdca-06c3-4243-abef-be4d46c5f0b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4393bdca-06c3-4243-abef-be4d46c5f0b3" (UID: "4393bdca-06c3-4243-abef-be4d46c5f0b3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.723275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" (UID: "ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.723651 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a31864-4394-433d-9f97-c52b9d9984e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33a31864-4394-433d-9f97-c52b9d9984e5" (UID: "33a31864-4394-433d-9f97-c52b9d9984e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.724851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-kube-api-access-zpp2t" (OuterVolumeSpecName: "kube-api-access-zpp2t") pod "ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" (UID: "ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5"). InnerVolumeSpecName "kube-api-access-zpp2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.727156 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.732744 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5024aeeb-12a7-40df-9ded-fa42366d647e-kube-api-access-tx582" (OuterVolumeSpecName: "kube-api-access-tx582") pod "5024aeeb-12a7-40df-9ded-fa42366d647e" (UID: "5024aeeb-12a7-40df-9ded-fa42366d647e"). 
InnerVolumeSpecName "kube-api-access-tx582". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.733253 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a31864-4394-433d-9f97-c52b9d9984e5-kube-api-access-8c45q" (OuterVolumeSpecName: "kube-api-access-8c45q") pod "33a31864-4394-433d-9f97-c52b9d9984e5" (UID: "33a31864-4394-433d-9f97-c52b9d9984e5"). InnerVolumeSpecName "kube-api-access-8c45q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.819344 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78cdv\" (UniqueName: \"kubernetes.io/projected/83faf020-4a1d-4f62-846b-0d94b1eeabd1-kube-api-access-78cdv\") pod \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.819583 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83faf020-4a1d-4f62-846b-0d94b1eeabd1-operator-scripts\") pod \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\" (UID: \"83faf020-4a1d-4f62-846b-0d94b1eeabd1\") " Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820095 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx582\" (UniqueName: \"kubernetes.io/projected/5024aeeb-12a7-40df-9ded-fa42366d647e-kube-api-access-tx582\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820113 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpp2t\" (UniqueName: \"kubernetes.io/projected/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-kube-api-access-zpp2t\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820123 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/5024aeeb-12a7-40df-9ded-fa42366d647e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820132 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4393bdca-06c3-4243-abef-be4d46c5f0b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820141 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgjlh\" (UniqueName: \"kubernetes.io/projected/4393bdca-06c3-4243-abef-be4d46c5f0b3-kube-api-access-cgjlh\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820150 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820160 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c45q\" (UniqueName: \"kubernetes.io/projected/33a31864-4394-433d-9f97-c52b9d9984e5-kube-api-access-8c45q\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.820169 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a31864-4394-433d-9f97-c52b9d9984e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.824361 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83faf020-4a1d-4f62-846b-0d94b1eeabd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83faf020-4a1d-4f62-846b-0d94b1eeabd1" (UID: "83faf020-4a1d-4f62-846b-0d94b1eeabd1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.827342 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83faf020-4a1d-4f62-846b-0d94b1eeabd1-kube-api-access-78cdv" (OuterVolumeSpecName: "kube-api-access-78cdv") pod "83faf020-4a1d-4f62-846b-0d94b1eeabd1" (UID: "83faf020-4a1d-4f62-846b-0d94b1eeabd1"). InnerVolumeSpecName "kube-api-access-78cdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.921884 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83faf020-4a1d-4f62-846b-0d94b1eeabd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:15 crc kubenswrapper[4805]: I0226 17:37:15.921915 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78cdv\" (UniqueName: \"kubernetes.io/projected/83faf020-4a1d-4f62-846b-0d94b1eeabd1-kube-api-access-78cdv\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.151410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"26cecc08-6d2b-4e0f-a231-8ac8764e8ddf","Type":"ContainerStarted","Data":"ee7289ec894e4919c88e294f6e113edb8d0f554063fc8a90d016b5bec7dbd447"} Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.151910 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.155532 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.155597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"82935132-2a23-4b0c-86c5-be40089b7e0b","Type":"ContainerStarted","Data":"2b9573e545642ce301ecb6bd38b1385665b816b74305fdbf5b8d3ba79aedd146"} Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.156805 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.159624 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-52f5k" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.159687 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c793c17-a107-4006-9e15-5a2ac2afa296","Type":"ContainerStarted","Data":"b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d"} Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.159763 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8349-account-create-update-7td5j" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.159696 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a0e5-account-create-update-w7vr5" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.159630 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mnplr" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.160405 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9qmc8" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.203678 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=28.607340629 podStartE2EDuration="1m10.203657882s" podCreationTimestamp="2026-02-26 17:36:06 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.519401063 +0000 UTC m=+1304.081155402" lastFinishedPulling="2026-02-26 17:37:11.115718316 +0000 UTC m=+1345.677472655" observedRunningTime="2026-02-26 17:37:16.20278863 +0000 UTC m=+1350.764542969" watchObservedRunningTime="2026-02-26 17:37:16.203657882 +0000 UTC m=+1350.765412221" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.264305 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371958.590773 podStartE2EDuration="1m18.264003388s" podCreationTimestamp="2026-02-26 17:35:58 +0000 UTC" firstStartedPulling="2026-02-26 17:36:00.975830211 +0000 UTC m=+1275.537584550" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:16.246223628 +0000 UTC m=+1350.807977967" watchObservedRunningTime="2026-02-26 17:37:16.264003388 +0000 UTC m=+1350.825757727" Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.322112 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2k9nw-config-2v4t4"] Feb 26 17:37:16 crc kubenswrapper[4805]: I0226 17:37:16.323443 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.205013542 podStartE2EDuration="1m17.32342012s" podCreationTimestamp="2026-02-26 17:35:59 +0000 UTC" firstStartedPulling="2026-02-26 17:36:01.297287754 +0000 UTC m=+1275.859042093" lastFinishedPulling="2026-02-26 17:36:40.415694332 +0000 UTC m=+1314.977448671" observedRunningTime="2026-02-26 17:37:16.300635114 +0000 UTC 
m=+1350.862389453" watchObservedRunningTime="2026-02-26 17:37:16.32342012 +0000 UTC m=+1350.885174459" Feb 26 17:37:16 crc kubenswrapper[4805]: W0226 17:37:16.341747 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59505e53_aa4c_4b3d_b53c_823454fa7471.slice/crio-990d297a98566e77998c4a5bec7e847b8203e918d491177cfe18ac58282c12ab WatchSource:0}: Error finding container 990d297a98566e77998c4a5bec7e847b8203e918d491177cfe18ac58282c12ab: Status 404 returned error can't find the container with id 990d297a98566e77998c4a5bec7e847b8203e918d491177cfe18ac58282c12ab Feb 26 17:37:17 crc kubenswrapper[4805]: I0226 17:37:17.061750 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9qmc8"] Feb 26 17:37:17 crc kubenswrapper[4805]: I0226 17:37:17.070744 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9qmc8"] Feb 26 17:37:17 crc kubenswrapper[4805]: I0226 17:37:17.189528 4805 generic.go:334] "Generic (PLEG): container finished" podID="59505e53-aa4c-4b3d-b53c-823454fa7471" containerID="f610594ac0862580585db48e5977b66ff34a84e46ec0eabaf617e75c90e0ef23" exitCode=0 Feb 26 17:37:17 crc kubenswrapper[4805]: I0226 17:37:17.189577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw-config-2v4t4" event={"ID":"59505e53-aa4c-4b3d-b53c-823454fa7471","Type":"ContainerDied","Data":"f610594ac0862580585db48e5977b66ff34a84e46ec0eabaf617e75c90e0ef23"} Feb 26 17:37:17 crc kubenswrapper[4805]: I0226 17:37:17.189619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw-config-2v4t4" event={"ID":"59505e53-aa4c-4b3d-b53c-823454fa7471","Type":"ContainerStarted","Data":"990d297a98566e77998c4a5bec7e847b8203e918d491177cfe18ac58282c12ab"} Feb 26 17:37:18 crc kubenswrapper[4805]: I0226 17:37:18.694182 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/cloudkitty-lokistack-ingester-0" podUID="537eaeba-93f9-4d28-871b-049946f86c2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 17:37:18 crc kubenswrapper[4805]: I0226 17:37:18.966154 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" path="/var/lib/kubelet/pods/ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5/volumes" Feb 26 17:37:19 crc kubenswrapper[4805]: I0226 17:37:19.185080 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2k9nw" Feb 26 17:37:19 crc kubenswrapper[4805]: I0226 17:37:19.901617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0" Feb 26 17:37:19 crc kubenswrapper[4805]: E0226 17:37:19.901807 4805 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 26 17:37:19 crc kubenswrapper[4805]: E0226 17:37:19.902675 4805 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 26 17:37:19 crc kubenswrapper[4805]: E0226 17:37:19.902726 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift podName:a466ee40-e6ef-4a36-96c6-88e7ce00a28c nodeName:}" failed. No retries permitted until 2026-02-26 17:37:51.902707578 +0000 UTC m=+1386.464461917 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift") pod "swift-storage-0" (UID: "a466ee40-e6ef-4a36-96c6-88e7ce00a28c") : configmap "swift-ring-files" not found Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.771200 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.921724 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mncb5"] Feb 26 17:37:20 crc kubenswrapper[4805]: E0226 17:37:20.922419 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4393bdca-06c3-4243-abef-be4d46c5f0b3" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922438 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4393bdca-06c3-4243-abef-be4d46c5f0b3" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: E0226 17:37:20.922476 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a31864-4394-433d-9f97-c52b9d9984e5" containerName="mariadb-database-create" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922483 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a31864-4394-433d-9f97-c52b9d9984e5" containerName="mariadb-database-create" Feb 26 17:37:20 crc kubenswrapper[4805]: E0226 17:37:20.922499 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83faf020-4a1d-4f62-846b-0d94b1eeabd1" containerName="mariadb-database-create" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922507 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="83faf020-4a1d-4f62-846b-0d94b1eeabd1" containerName="mariadb-database-create" Feb 26 17:37:20 crc kubenswrapper[4805]: E0226 17:37:20.922520 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922528 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: E0226 17:37:20.922539 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5024aeeb-12a7-40df-9ded-fa42366d647e" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922547 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5024aeeb-12a7-40df-9ded-fa42366d647e" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922812 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a31864-4394-433d-9f97-c52b9d9984e5" containerName="mariadb-database-create" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.922852 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4393bdca-06c3-4243-abef-be4d46c5f0b3" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.923182 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="83faf020-4a1d-4f62-846b-0d94b1eeabd1" containerName="mariadb-database-create" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.923199 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5024aeeb-12a7-40df-9ded-fa42366d647e" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.923212 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab16e0c7-ca1e-42c0-8a3d-0ceeb9410ce5" containerName="mariadb-account-create-update" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.924153 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.929690 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 26 17:37:20 crc kubenswrapper[4805]: I0226 17:37:20.941619 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mncb5"] Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.024568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsjr\" (UniqueName: \"kubernetes.io/projected/79c16d87-0988-452a-99cc-559a6db293d4-kube-api-access-2vsjr\") pod \"root-account-create-update-mncb5\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.024653 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79c16d87-0988-452a-99cc-559a6db293d4-operator-scripts\") pod \"root-account-create-update-mncb5\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.126220 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsjr\" (UniqueName: \"kubernetes.io/projected/79c16d87-0988-452a-99cc-559a6db293d4-kube-api-access-2vsjr\") pod \"root-account-create-update-mncb5\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.126337 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79c16d87-0988-452a-99cc-559a6db293d4-operator-scripts\") pod \"root-account-create-update-mncb5\" (UID: 
\"79c16d87-0988-452a-99cc-559a6db293d4\") " pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.128692 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79c16d87-0988-452a-99cc-559a6db293d4-operator-scripts\") pod \"root-account-create-update-mncb5\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.151973 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vsjr\" (UniqueName: \"kubernetes.io/projected/79c16d87-0988-452a-99cc-559a6db293d4-kube-api-access-2vsjr\") pod \"root-account-create-update-mncb5\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:21 crc kubenswrapper[4805]: I0226 17:37:21.257060 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.095027 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.248671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw-config-2v4t4" event={"ID":"59505e53-aa4c-4b3d-b53c-823454fa7471","Type":"ContainerDied","Data":"990d297a98566e77998c4a5bec7e847b8203e918d491177cfe18ac58282c12ab"} Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.248720 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="990d297a98566e77998c4a5bec7e847b8203e918d491177cfe18ac58282c12ab" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.248773 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-2v4t4" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.258808 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-log-ovn\") pod \"59505e53-aa4c-4b3d-b53c-823454fa7471\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.258899 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-additional-scripts\") pod \"59505e53-aa4c-4b3d-b53c-823454fa7471\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.258924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run-ovn\") pod \"59505e53-aa4c-4b3d-b53c-823454fa7471\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.258945 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqnlh\" (UniqueName: \"kubernetes.io/projected/59505e53-aa4c-4b3d-b53c-823454fa7471-kube-api-access-vqnlh\") pod \"59505e53-aa4c-4b3d-b53c-823454fa7471\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.258979 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-scripts\") pod \"59505e53-aa4c-4b3d-b53c-823454fa7471\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.259009 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run\") pod \"59505e53-aa4c-4b3d-b53c-823454fa7471\" (UID: \"59505e53-aa4c-4b3d-b53c-823454fa7471\") " Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.259600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run" (OuterVolumeSpecName: "var-run") pod "59505e53-aa4c-4b3d-b53c-823454fa7471" (UID: "59505e53-aa4c-4b3d-b53c-823454fa7471"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.259649 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "59505e53-aa4c-4b3d-b53c-823454fa7471" (UID: "59505e53-aa4c-4b3d-b53c-823454fa7471"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.259998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "59505e53-aa4c-4b3d-b53c-823454fa7471" (UID: "59505e53-aa4c-4b3d-b53c-823454fa7471"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.260477 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "59505e53-aa4c-4b3d-b53c-823454fa7471" (UID: "59505e53-aa4c-4b3d-b53c-823454fa7471"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.261809 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-scripts" (OuterVolumeSpecName: "scripts") pod "59505e53-aa4c-4b3d-b53c-823454fa7471" (UID: "59505e53-aa4c-4b3d-b53c-823454fa7471"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.264612 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59505e53-aa4c-4b3d-b53c-823454fa7471-kube-api-access-vqnlh" (OuterVolumeSpecName: "kube-api-access-vqnlh") pod "59505e53-aa4c-4b3d-b53c-823454fa7471" (UID: "59505e53-aa4c-4b3d-b53c-823454fa7471"). InnerVolumeSpecName "kube-api-access-vqnlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.361474 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.361814 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.361831 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqnlh\" (UniqueName: \"kubernetes.io/projected/59505e53-aa4c-4b3d-b53c-823454fa7471-kube-api-access-vqnlh\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.361846 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/59505e53-aa4c-4b3d-b53c-823454fa7471-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.361857 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-run\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:22 crc kubenswrapper[4805]: I0226 17:37:22.361868 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/59505e53-aa4c-4b3d-b53c-823454fa7471-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.202853 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2k9nw-config-2v4t4"] Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.210558 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2k9nw-config-2v4t4"] Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.262703 4805 generic.go:334] "Generic (PLEG): container finished" podID="81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" containerID="45a10978992e96e8dab30e7448d74563509ad4581de962d4250f4ac2fd0a6ef2" exitCode=0 Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.262748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wlxbg" event={"ID":"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975","Type":"ContainerDied","Data":"45a10978992e96e8dab30e7448d74563509ad4581de962d4250f4ac2fd0a6ef2"} Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.344936 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2k9nw-config-h7wv6"] Feb 26 17:37:23 crc kubenswrapper[4805]: E0226 17:37:23.345482 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59505e53-aa4c-4b3d-b53c-823454fa7471" containerName="ovn-config" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.357173 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="59505e53-aa4c-4b3d-b53c-823454fa7471" containerName="ovn-config" Feb 26 
17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.357624 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="59505e53-aa4c-4b3d-b53c-823454fa7471" containerName="ovn-config" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.358328 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.363228 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.370308 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2k9nw-config-h7wv6"] Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.383174 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9lhc\" (UniqueName: \"kubernetes.io/projected/2230e6b5-fd24-44a3-8426-c45cd271d04c-kube-api-access-j9lhc\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.383260 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.383285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-scripts\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc 
kubenswrapper[4805]: I0226 17:37:23.383426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-log-ovn\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.383471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run-ovn\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.383515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-additional-scripts\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.484354 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.484407 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-scripts\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" 
Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.484487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-log-ovn\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.484510 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run-ovn\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.484540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-additional-scripts\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.484642 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9lhc\" (UniqueName: \"kubernetes.io/projected/2230e6b5-fd24-44a3-8426-c45cd271d04c-kube-api-access-j9lhc\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.485009 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 
crc kubenswrapper[4805]: I0226 17:37:23.485138 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-log-ovn\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.485421 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run-ovn\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.485590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-additional-scripts\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.486727 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-scripts\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.505330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9lhc\" (UniqueName: \"kubernetes.io/projected/2230e6b5-fd24-44a3-8426-c45cd271d04c-kube-api-access-j9lhc\") pod \"ovn-controller-2k9nw-config-h7wv6\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:23 crc kubenswrapper[4805]: I0226 17:37:23.674737 
4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:24 crc kubenswrapper[4805]: I0226 17:37:24.963818 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59505e53-aa4c-4b3d-b53c-823454fa7471" path="/var/lib/kubelet/pods/59505e53-aa4c-4b3d-b53c-823454fa7471/volumes" Feb 26 17:37:27 crc kubenswrapper[4805]: I0226 17:37:27.774257 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 26 17:37:28 crc kubenswrapper[4805]: I0226 17:37:28.687600 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="537eaeba-93f9-4d28-871b-049946f86c2b" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 26 17:37:29 crc kubenswrapper[4805]: E0226 17:37:29.611478 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 26 17:37:29 crc kubenswrapper[4805]: E0226 17:37:29.611925 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5zcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-sqks9_openstack(f21b0b57-d027-42a1-a3c9-b4030f589db8): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 26 17:37:29 crc kubenswrapper[4805]: E0226 17:37:29.613411 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-sqks9" podUID="f21b0b57-d027-42a1-a3c9-b4030f589db8" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.748277 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wlxbg" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.816789 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-etc-swift\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.816898 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-dispersionconf\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.816930 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-scripts\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.817065 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-swiftconf\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc 
kubenswrapper[4805]: I0226 17:37:29.817099 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-combined-ca-bundle\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.817193 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgv6x\" (UniqueName: \"kubernetes.io/projected/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-kube-api-access-hgv6x\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.817230 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-ring-data-devices\") pod \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\" (UID: \"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975\") " Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.818069 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.818269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.822117 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-kube-api-access-hgv6x" (OuterVolumeSpecName: "kube-api-access-hgv6x") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "kube-api-access-hgv6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: E0226 17:37:29.830698 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="657f7632-1861-4399-9731-81e9977c7640" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.844333 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.844867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-scripts" (OuterVolumeSpecName: "scripts") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.845143 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.848989 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" (UID: "81fa1ce3-014a-4a1d-b880-7d1ea1fb6975"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919462 4805 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919499 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919516 4805 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919528 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 
17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919542 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgv6x\" (UniqueName: \"kubernetes.io/projected/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-kube-api-access-hgv6x\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919556 4805 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:29 crc kubenswrapper[4805]: I0226 17:37:29.919566 4805 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/81fa1ce3-014a-4a1d-b880-7d1ea1fb6975-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.086085 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mncb5"] Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.093584 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2k9nw-config-h7wv6"] Feb 26 17:37:30 crc kubenswrapper[4805]: W0226 17:37:30.095352 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79c16d87_0988_452a_99cc_559a6db293d4.slice/crio-9d571c30e7becd8408ea646f0cff56250f4dd63265d3e5a406cde4e2bc1b63e0 WatchSource:0}: Error finding container 9d571c30e7becd8408ea646f0cff56250f4dd63265d3e5a406cde4e2bc1b63e0: Status 404 returned error can't find the container with id 9d571c30e7becd8408ea646f0cff56250f4dd63265d3e5a406cde4e2bc1b63e0 Feb 26 17:37:30 crc kubenswrapper[4805]: W0226 17:37:30.101792 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2230e6b5_fd24_44a3_8426_c45cd271d04c.slice/crio-fddb0a697c0f9b7cbc2934017fe435f5c3f4ca064649384a2270d7c7f10273b7 
WatchSource:0}: Error finding container fddb0a697c0f9b7cbc2934017fe435f5c3f4ca064649384a2270d7c7f10273b7: Status 404 returned error can't find the container with id fddb0a697c0f9b7cbc2934017fe435f5c3f4ca064649384a2270d7c7f10273b7 Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.327420 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mncb5" event={"ID":"79c16d87-0988-452a-99cc-559a6db293d4","Type":"ContainerStarted","Data":"d552197cadf847ffe729fef5968af668f3ef5785b74c02b27d1baec1d29c447b"} Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.327473 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mncb5" event={"ID":"79c16d87-0988-452a-99cc-559a6db293d4","Type":"ContainerStarted","Data":"9d571c30e7becd8408ea646f0cff56250f4dd63265d3e5a406cde4e2bc1b63e0"} Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.329906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerStarted","Data":"759f92c87f54e0f15f61adc82ebf04171eead97bed3a72586349d2316cd0318a"} Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.330279 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.332386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wlxbg" event={"ID":"81fa1ce3-014a-4a1d-b880-7d1ea1fb6975","Type":"ContainerDied","Data":"ff447b3402c0e42a7ed48076341e6a60383479f074ef5de92881621ae6d43155"} Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.332410 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wlxbg" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.332418 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff447b3402c0e42a7ed48076341e6a60383479f074ef5de92881621ae6d43155" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.333970 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw-config-h7wv6" event={"ID":"2230e6b5-fd24-44a3-8426-c45cd271d04c","Type":"ContainerStarted","Data":"fddb0a697c0f9b7cbc2934017fe435f5c3f4ca064649384a2270d7c7f10273b7"} Feb 26 17:37:30 crc kubenswrapper[4805]: E0226 17:37:30.335209 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-sqks9" podUID="f21b0b57-d027-42a1-a3c9-b4030f589db8" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.353616 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-mncb5" podStartSLOduration=10.353595923 podStartE2EDuration="10.353595923s" podCreationTimestamp="2026-02-26 17:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:30.341809645 +0000 UTC m=+1364.903563984" watchObservedRunningTime="2026-02-26 17:37:30.353595923 +0000 UTC m=+1364.915350262" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.668830 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9dzgt"] Feb 26 17:37:30 crc kubenswrapper[4805]: E0226 17:37:30.669636 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" containerName="swift-ring-rebalance" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.669655 
4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" containerName="swift-ring-rebalance" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.669916 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fa1ce3-014a-4a1d-b880-7d1ea1fb6975" containerName="swift-ring-rebalance" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.670861 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.686857 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9dzgt"] Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.774237 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.838668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-operator-scripts\") pod \"cinder-db-create-9dzgt\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.838870 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb7nj\" (UniqueName: \"kubernetes.io/projected/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-kube-api-access-lb7nj\") pod \"cinder-db-create-9dzgt\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.899649 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-rdwzk"] Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.900923 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.923502 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-rdwzk"] Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.941714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb7nj\" (UniqueName: \"kubernetes.io/projected/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-kube-api-access-lb7nj\") pod \"cinder-db-create-9dzgt\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.941786 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-operator-scripts\") pod \"cinder-db-create-9dzgt\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.944000 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-operator-scripts\") pod \"cinder-db-create-9dzgt\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.951082 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0480-account-create-update-pxmst"] Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.952866 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.962453 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.971740 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb7nj\" (UniqueName: \"kubernetes.io/projected/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-kube-api-access-lb7nj\") pod \"cinder-db-create-9dzgt\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:30 crc kubenswrapper[4805]: I0226 17:37:30.992228 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0480-account-create-update-pxmst"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.014789 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.044305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-operator-scripts\") pod \"cinder-0480-account-create-update-pxmst\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.044513 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-operator-scripts\") pod \"cloudkitty-db-create-rdwzk\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.044546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hzs86\" (UniqueName: \"kubernetes.io/projected/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-kube-api-access-hzs86\") pod \"cinder-0480-account-create-update-pxmst\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.044671 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxcvc\" (UniqueName: \"kubernetes.io/projected/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-kube-api-access-vxcvc\") pod \"cloudkitty-db-create-rdwzk\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.074471 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vnpwb"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.075919 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.087181 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vnpwb"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.146529 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-operator-scripts\") pod \"cinder-0480-account-create-update-pxmst\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.146899 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d622ea-77ad-4464-8359-f8e53216ebe8-operator-scripts\") pod \"barbican-db-create-vnpwb\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " 
pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.147053 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-operator-scripts\") pod \"cloudkitty-db-create-rdwzk\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.147096 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzs86\" (UniqueName: \"kubernetes.io/projected/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-kube-api-access-hzs86\") pod \"cinder-0480-account-create-update-pxmst\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.147189 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxcvc\" (UniqueName: \"kubernetes.io/projected/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-kube-api-access-vxcvc\") pod \"cloudkitty-db-create-rdwzk\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.147263 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dv6\" (UniqueName: \"kubernetes.io/projected/92d622ea-77ad-4464-8359-f8e53216ebe8-kube-api-access-f9dv6\") pod \"barbican-db-create-vnpwb\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.147439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-operator-scripts\") pod \"cinder-0480-account-create-update-pxmst\" (UID: 
\"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.147952 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-operator-scripts\") pod \"cloudkitty-db-create-rdwzk\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.166549 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xbnw6"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.168102 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.171433 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.171592 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvnnf" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.171642 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.171799 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.176226 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xbnw6"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.190909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzs86\" (UniqueName: \"kubernetes.io/projected/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-kube-api-access-hzs86\") pod \"cinder-0480-account-create-update-pxmst\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " 
pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.203250 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2632-account-create-update-p9bx4"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.204528 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.206799 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxcvc\" (UniqueName: \"kubernetes.io/projected/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-kube-api-access-vxcvc\") pod \"cloudkitty-db-create-rdwzk\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.209373 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.221421 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2632-account-create-update-p9bx4"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.241873 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.253198 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9dv6\" (UniqueName: \"kubernetes.io/projected/92d622ea-77ad-4464-8359-f8e53216ebe8-kube-api-access-f9dv6\") pod \"barbican-db-create-vnpwb\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.253262 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-combined-ca-bundle\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.253370 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d622ea-77ad-4464-8359-f8e53216ebe8-operator-scripts\") pod \"barbican-db-create-vnpwb\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.253433 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-config-data\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.253475 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/570af5c6-aef6-4d10-93c9-439dfb9695ee-kube-api-access-xcdk4\") pod \"keystone-db-sync-xbnw6\" (UID: 
\"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.255050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d622ea-77ad-4464-8359-f8e53216ebe8-operator-scripts\") pod \"barbican-db-create-vnpwb\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.279786 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9dv6\" (UniqueName: \"kubernetes.io/projected/92d622ea-77ad-4464-8359-f8e53216ebe8-kube-api-access-f9dv6\") pod \"barbican-db-create-vnpwb\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.294642 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5wk8w"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.295827 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.312283 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5wk8w"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.334616 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5832-account-create-update-dk78q"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.335825 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.340258 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.355823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqblk\" (UniqueName: \"kubernetes.io/projected/10b4ec10-7e0f-4640-be48-7ed1584ff69f-kube-api-access-bqblk\") pod \"neutron-db-create-5wk8w\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.355911 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10b4ec10-7e0f-4640-be48-7ed1584ff69f-operator-scripts\") pod \"neutron-db-create-5wk8w\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.355970 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-config-data\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.356036 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/570af5c6-aef6-4d10-93c9-439dfb9695ee-kube-api-access-xcdk4\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.356120 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-operator-scripts\") pod \"barbican-2632-account-create-update-p9bx4\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.356135 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfttj\" (UniqueName: \"kubernetes.io/projected/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-kube-api-access-pfttj\") pod \"barbican-2632-account-create-update-p9bx4\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.356159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-combined-ca-bundle\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.365826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-combined-ca-bundle\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.375126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-config-data\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.387248 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="2230e6b5-fd24-44a3-8426-c45cd271d04c" containerID="9b1d804fe2c74f3e360988241c924b22f92f93ac52ec093add6b80e0ea81450c" exitCode=0 Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.387328 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw-config-h7wv6" event={"ID":"2230e6b5-fd24-44a3-8426-c45cd271d04c","Type":"ContainerDied","Data":"9b1d804fe2c74f3e360988241c924b22f92f93ac52ec093add6b80e0ea81450c"} Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.390093 4805 generic.go:334] "Generic (PLEG): container finished" podID="79c16d87-0988-452a-99cc-559a6db293d4" containerID="d552197cadf847ffe729fef5968af668f3ef5785b74c02b27d1baec1d29c447b" exitCode=0 Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.390122 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mncb5" event={"ID":"79c16d87-0988-452a-99cc-559a6db293d4","Type":"ContainerDied","Data":"d552197cadf847ffe729fef5968af668f3ef5785b74c02b27d1baec1d29c447b"} Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.393491 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5832-account-create-update-dk78q"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.399870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/570af5c6-aef6-4d10-93c9-439dfb9695ee-kube-api-access-xcdk4\") pod \"keystone-db-sync-xbnw6\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") " pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.423403 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-6bb0-account-create-update-kjgd9"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.424510 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.426185 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.428787 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.438527 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.439460 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-6bb0-account-create-update-kjgd9"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.457574 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b81ab7-3249-4960-b05c-7c75f88ed845-operator-scripts\") pod \"neutron-5832-account-create-update-dk78q\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.457674 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-operator-scripts\") pod \"barbican-2632-account-create-update-p9bx4\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.457696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfttj\" (UniqueName: \"kubernetes.io/projected/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-kube-api-access-pfttj\") pod \"barbican-2632-account-create-update-p9bx4\" (UID: 
\"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.457713 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rj47\" (UniqueName: \"kubernetes.io/projected/78b81ab7-3249-4960-b05c-7c75f88ed845-kube-api-access-2rj47\") pod \"neutron-5832-account-create-update-dk78q\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.457759 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqblk\" (UniqueName: \"kubernetes.io/projected/10b4ec10-7e0f-4640-be48-7ed1584ff69f-kube-api-access-bqblk\") pod \"neutron-db-create-5wk8w\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.457822 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10b4ec10-7e0f-4640-be48-7ed1584ff69f-operator-scripts\") pod \"neutron-db-create-5wk8w\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.458711 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10b4ec10-7e0f-4640-be48-7ed1584ff69f-operator-scripts\") pod \"neutron-db-create-5wk8w\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.458773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-operator-scripts\") pod 
\"barbican-2632-account-create-update-p9bx4\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.479994 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfttj\" (UniqueName: \"kubernetes.io/projected/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-kube-api-access-pfttj\") pod \"barbican-2632-account-create-update-p9bx4\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.481167 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqblk\" (UniqueName: \"kubernetes.io/projected/10b4ec10-7e0f-4640-be48-7ed1584ff69f-kube-api-access-bqblk\") pod \"neutron-db-create-5wk8w\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.560744 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b81ab7-3249-4960-b05c-7c75f88ed845-operator-scripts\") pod \"neutron-5832-account-create-update-dk78q\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.560837 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqz4\" (UniqueName: \"kubernetes.io/projected/febba13f-b5af-4cb3-be2a-38f379c5b1aa-kube-api-access-xbqz4\") pod \"cloudkitty-6bb0-account-create-update-kjgd9\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.560920 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rj47\" 
(UniqueName: \"kubernetes.io/projected/78b81ab7-3249-4960-b05c-7c75f88ed845-kube-api-access-2rj47\") pod \"neutron-5832-account-create-update-dk78q\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.560953 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febba13f-b5af-4cb3-be2a-38f379c5b1aa-operator-scripts\") pod \"cloudkitty-6bb0-account-create-update-kjgd9\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.561879 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b81ab7-3249-4960-b05c-7c75f88ed845-operator-scripts\") pod \"neutron-5832-account-create-update-dk78q\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.590405 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rj47\" (UniqueName: \"kubernetes.io/projected/78b81ab7-3249-4960-b05c-7c75f88ed845-kube-api-access-2rj47\") pod \"neutron-5832-account-create-update-dk78q\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.629567 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xbnw6" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.645509 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.663233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqz4\" (UniqueName: \"kubernetes.io/projected/febba13f-b5af-4cb3-be2a-38f379c5b1aa-kube-api-access-xbqz4\") pod \"cloudkitty-6bb0-account-create-update-kjgd9\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.663361 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febba13f-b5af-4cb3-be2a-38f379c5b1aa-operator-scripts\") pod \"cloudkitty-6bb0-account-create-update-kjgd9\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.664290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febba13f-b5af-4cb3-be2a-38f379c5b1aa-operator-scripts\") pod \"cloudkitty-6bb0-account-create-update-kjgd9\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.675864 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.681688 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqz4\" (UniqueName: \"kubernetes.io/projected/febba13f-b5af-4cb3-be2a-38f379c5b1aa-kube-api-access-xbqz4\") pod \"cloudkitty-6bb0-account-create-update-kjgd9\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.689509 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.745463 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.791248 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9dzgt"] Feb 26 17:37:31 crc kubenswrapper[4805]: I0226 17:37:31.837203 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-rdwzk"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.083294 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vnpwb"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.095773 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0480-account-create-update-pxmst"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.373781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2632-account-create-update-p9bx4"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.389307 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xbnw6"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.409587 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-5832-account-create-update-dk78q"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.423209 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5wk8w"] Feb 26 17:37:32 crc kubenswrapper[4805]: W0226 17:37:32.430191 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b4ec10_7e0f_4640_be48_7ed1584ff69f.slice/crio-e42282bd3dbe0b79e085b0c606af66352c78efd96fdf31ed3e4058403b9cb6f0 WatchSource:0}: Error finding container e42282bd3dbe0b79e085b0c606af66352c78efd96fdf31ed3e4058403b9cb6f0: Status 404 returned error can't find the container with id e42282bd3dbe0b79e085b0c606af66352c78efd96fdf31ed3e4058403b9cb6f0 Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.464085 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2632-account-create-update-p9bx4" event={"ID":"bcb840ba-94cc-4a7a-beea-6505c3f54f5d","Type":"ContainerStarted","Data":"eb1116a93d6a2648bb8d76b840f2df340e8d34958aaa8dddd64adb21224a005c"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.488663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vnpwb" event={"ID":"92d622ea-77ad-4464-8359-f8e53216ebe8","Type":"ContainerStarted","Data":"63d43e4d689c086fac7be79b97f7df6fccc4cd22d9f1515c95e7ccbfd5cfecbc"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.512721 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-vnpwb" podStartSLOduration=1.5127043489999998 podStartE2EDuration="1.512704349s" podCreationTimestamp="2026-02-26 17:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:32.511181091 +0000 UTC m=+1367.072935430" watchObservedRunningTime="2026-02-26 17:37:32.512704349 +0000 UTC m=+1367.074458688" Feb 26 17:37:32 crc 
kubenswrapper[4805]: I0226 17:37:32.541844 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerStarted","Data":"cfd7f4516b1e377f9ae6dd83a9a71bee2ee6a82f821e09f6f6ccff2dcb87d2e6"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.552361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9dzgt" event={"ID":"9d63385b-f9af-42fb-b2eb-b1e3e8975f44","Type":"ContainerStarted","Data":"2dda3c25afac783c6d780332cbc08c804a3ec2abd063b47691a80e7f8395bb35"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.552448 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9dzgt" event={"ID":"9d63385b-f9af-42fb-b2eb-b1e3e8975f44","Type":"ContainerStarted","Data":"e94ce3eaa4d7e995434db65ed75c6e9b8214621cae610782e83cc221aa9f713d"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.578037 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0480-account-create-update-pxmst" event={"ID":"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc","Type":"ContainerStarted","Data":"ff547a727c3c968f2699fede59b737bedf7fbd78e91116040f3ad2d9290b7f6a"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.578086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0480-account-create-update-pxmst" event={"ID":"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc","Type":"ContainerStarted","Data":"435867369b90cc7d4ea62a9cee7ee3bd6bbb03656a2bcce0681e2500c6f0b6d7"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.580234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-rdwzk" event={"ID":"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c","Type":"ContainerStarted","Data":"0a44610bcbb0a3478571154c788d395245df50b17c2df5a8002c8afd5d19109a"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.580264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-db-create-rdwzk" event={"ID":"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c","Type":"ContainerStarted","Data":"82af9491ff68115b0dd7b494795c8bfa8645c17440740ae0178d5195b48329c9"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.584514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xbnw6" event={"ID":"570af5c6-aef6-4d10-93c9-439dfb9695ee","Type":"ContainerStarted","Data":"cf2cd1dd4246972ada24759ba4510dd670fa1507db64fecfe0c3009c37be2bc7"} Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.634460 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=25.205893841 podStartE2EDuration="1m27.634435457s" podCreationTimestamp="2026-02-26 17:36:05 +0000 UTC" firstStartedPulling="2026-02-26 17:36:29.474659092 +0000 UTC m=+1304.036413431" lastFinishedPulling="2026-02-26 17:37:31.903200708 +0000 UTC m=+1366.464955047" observedRunningTime="2026-02-26 17:37:32.625404859 +0000 UTC m=+1367.187159198" watchObservedRunningTime="2026-02-26 17:37:32.634435457 +0000 UTC m=+1367.196189796" Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.675006 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-9dzgt" podStartSLOduration=2.674966312 podStartE2EDuration="2.674966312s" podCreationTimestamp="2026-02-26 17:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:32.657385178 +0000 UTC m=+1367.219139527" watchObservedRunningTime="2026-02-26 17:37:32.674966312 +0000 UTC m=+1367.236720651" Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.695532 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-0480-account-create-update-pxmst" podStartSLOduration=2.695499521 podStartE2EDuration="2.695499521s" podCreationTimestamp="2026-02-26 17:37:30 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:32.692180837 +0000 UTC m=+1367.253935176" watchObservedRunningTime="2026-02-26 17:37:32.695499521 +0000 UTC m=+1367.257253860" Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.727045 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-6bb0-account-create-update-kjgd9"] Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.785154 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-create-rdwzk" podStartSLOduration=2.785125448 podStartE2EDuration="2.785125448s" podCreationTimestamp="2026-02-26 17:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:32.747707591 +0000 UTC m=+1367.309461930" watchObservedRunningTime="2026-02-26 17:37:32.785125448 +0000 UTC m=+1367.346879787" Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.978607 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.978710 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.978764 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:37:32 
crc kubenswrapper[4805]: I0226 17:37:32.979705 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"790d5b8d614e0ff22c848f43674f8b7d4c300d976397a943518fa87467a5a9a3"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:37:32 crc kubenswrapper[4805]: I0226 17:37:32.979772 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://790d5b8d614e0ff22c848f43674f8b7d4c300d976397a943518fa87467a5a9a3" gracePeriod=600 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.208285 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.403812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-log-ovn\") pod \"2230e6b5-fd24-44a3-8426-c45cd271d04c\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.403922 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-additional-scripts\") pod \"2230e6b5-fd24-44a3-8426-c45cd271d04c\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.403955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run\") pod 
\"2230e6b5-fd24-44a3-8426-c45cd271d04c\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.404059 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9lhc\" (UniqueName: \"kubernetes.io/projected/2230e6b5-fd24-44a3-8426-c45cd271d04c-kube-api-access-j9lhc\") pod \"2230e6b5-fd24-44a3-8426-c45cd271d04c\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.404100 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run-ovn\") pod \"2230e6b5-fd24-44a3-8426-c45cd271d04c\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.404149 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-scripts\") pod \"2230e6b5-fd24-44a3-8426-c45cd271d04c\" (UID: \"2230e6b5-fd24-44a3-8426-c45cd271d04c\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.404824 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run" (OuterVolumeSpecName: "var-run") pod "2230e6b5-fd24-44a3-8426-c45cd271d04c" (UID: "2230e6b5-fd24-44a3-8426-c45cd271d04c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.404939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2230e6b5-fd24-44a3-8426-c45cd271d04c" (UID: "2230e6b5-fd24-44a3-8426-c45cd271d04c"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.405153 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2230e6b5-fd24-44a3-8426-c45cd271d04c" (UID: "2230e6b5-fd24-44a3-8426-c45cd271d04c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.405564 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2230e6b5-fd24-44a3-8426-c45cd271d04c" (UID: "2230e6b5-fd24-44a3-8426-c45cd271d04c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.406397 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-scripts" (OuterVolumeSpecName: "scripts") pod "2230e6b5-fd24-44a3-8426-c45cd271d04c" (UID: "2230e6b5-fd24-44a3-8426-c45cd271d04c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.435527 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2230e6b5-fd24-44a3-8426-c45cd271d04c-kube-api-access-j9lhc" (OuterVolumeSpecName: "kube-api-access-j9lhc") pod "2230e6b5-fd24-44a3-8426-c45cd271d04c" (UID: "2230e6b5-fd24-44a3-8426-c45cd271d04c"). InnerVolumeSpecName "kube-api-access-j9lhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.500401 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.507150 4805 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.507195 4805 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.507223 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9lhc\" (UniqueName: \"kubernetes.io/projected/2230e6b5-fd24-44a3-8426-c45cd271d04c-kube-api-access-j9lhc\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.507233 4805 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.507242 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2230e6b5-fd24-44a3-8426-c45cd271d04c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.507249 4805 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2230e6b5-fd24-44a3-8426-c45cd271d04c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.599284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2k9nw-config-h7wv6" event={"ID":"2230e6b5-fd24-44a3-8426-c45cd271d04c","Type":"ContainerDied","Data":"fddb0a697c0f9b7cbc2934017fe435f5c3f4ca064649384a2270d7c7f10273b7"} Feb 26 17:37:33 
crc kubenswrapper[4805]: I0226 17:37:33.599311 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2k9nw-config-h7wv6" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.599322 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fddb0a697c0f9b7cbc2934017fe435f5c3f4ca064649384a2270d7c7f10273b7" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.601979 4805 generic.go:334] "Generic (PLEG): container finished" podID="a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" containerID="ff547a727c3c968f2699fede59b737bedf7fbd78e91116040f3ad2d9290b7f6a" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.602051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0480-account-create-update-pxmst" event={"ID":"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc","Type":"ContainerDied","Data":"ff547a727c3c968f2699fede59b737bedf7fbd78e91116040f3ad2d9290b7f6a"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.607949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79c16d87-0988-452a-99cc-559a6db293d4-operator-scripts\") pod \"79c16d87-0988-452a-99cc-559a6db293d4\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.608072 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vsjr\" (UniqueName: \"kubernetes.io/projected/79c16d87-0988-452a-99cc-559a6db293d4-kube-api-access-2vsjr\") pod \"79c16d87-0988-452a-99cc-559a6db293d4\" (UID: \"79c16d87-0988-452a-99cc-559a6db293d4\") " Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.610126 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79c16d87-0988-452a-99cc-559a6db293d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"79c16d87-0988-452a-99cc-559a6db293d4" (UID: "79c16d87-0988-452a-99cc-559a6db293d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.613059 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c16d87-0988-452a-99cc-559a6db293d4-kube-api-access-2vsjr" (OuterVolumeSpecName: "kube-api-access-2vsjr") pod "79c16d87-0988-452a-99cc-559a6db293d4" (UID: "79c16d87-0988-452a-99cc-559a6db293d4"). InnerVolumeSpecName "kube-api-access-2vsjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.615504 4805 generic.go:334] "Generic (PLEG): container finished" podID="5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" containerID="0a44610bcbb0a3478571154c788d395245df50b17c2df5a8002c8afd5d19109a" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.615571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-rdwzk" event={"ID":"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c","Type":"ContainerDied","Data":"0a44610bcbb0a3478571154c788d395245df50b17c2df5a8002c8afd5d19109a"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.617424 4805 generic.go:334] "Generic (PLEG): container finished" podID="92d622ea-77ad-4464-8359-f8e53216ebe8" containerID="63252f9c61caf239d73cd6d4b132187d9c44755e88bab88df84bd73f65ead253" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.617490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vnpwb" event={"ID":"92d622ea-77ad-4464-8359-f8e53216ebe8","Type":"ContainerDied","Data":"63252f9c61caf239d73cd6d4b132187d9c44755e88bab88df84bd73f65ead253"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.624484 4805 generic.go:334] "Generic (PLEG): container finished" podID="10b4ec10-7e0f-4640-be48-7ed1584ff69f" 
containerID="36c0183f2698f80ffe4d28383cabae18a68abf130af834a15aeb87225b2371c5" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.624542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5wk8w" event={"ID":"10b4ec10-7e0f-4640-be48-7ed1584ff69f","Type":"ContainerDied","Data":"36c0183f2698f80ffe4d28383cabae18a68abf130af834a15aeb87225b2371c5"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.624568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5wk8w" event={"ID":"10b4ec10-7e0f-4640-be48-7ed1584ff69f","Type":"ContainerStarted","Data":"e42282bd3dbe0b79e085b0c606af66352c78efd96fdf31ed3e4058403b9cb6f0"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.637442 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d63385b-f9af-42fb-b2eb-b1e3e8975f44" containerID="2dda3c25afac783c6d780332cbc08c804a3ec2abd063b47691a80e7f8395bb35" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.637515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9dzgt" event={"ID":"9d63385b-f9af-42fb-b2eb-b1e3e8975f44","Type":"ContainerDied","Data":"2dda3c25afac783c6d780332cbc08c804a3ec2abd063b47691a80e7f8395bb35"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.641150 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="790d5b8d614e0ff22c848f43674f8b7d4c300d976397a943518fa87467a5a9a3" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.641210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"790d5b8d614e0ff22c848f43674f8b7d4c300d976397a943518fa87467a5a9a3"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.641232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"fda738fe0407aa3e4e71cd0054243c0ef019a44dbbf48701bf838c7b50aeb1e6"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.641247 4805 scope.go:117] "RemoveContainer" containerID="990e4f598fe9e20fbcba699271a0052db1db79e508164396a947ff262b514683" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.647960 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5832-account-create-update-dk78q" event={"ID":"78b81ab7-3249-4960-b05c-7c75f88ed845","Type":"ContainerStarted","Data":"4aa4e1421aad85bae0d2ce9ae9802e3c6bd70d7283d0a45441de19e55988499b"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.647999 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5832-account-create-update-dk78q" event={"ID":"78b81ab7-3249-4960-b05c-7c75f88ed845","Type":"ContainerStarted","Data":"3fa831e460531e02e0847dd12193bc493d159db8abeb288f344138d8ef99f484"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.654181 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mncb5" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.654400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mncb5" event={"ID":"79c16d87-0988-452a-99cc-559a6db293d4","Type":"ContainerDied","Data":"9d571c30e7becd8408ea646f0cff56250f4dd63265d3e5a406cde4e2bc1b63e0"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.654437 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d571c30e7becd8408ea646f0cff56250f4dd63265d3e5a406cde4e2bc1b63e0" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.656216 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" event={"ID":"febba13f-b5af-4cb3-be2a-38f379c5b1aa","Type":"ContainerStarted","Data":"2d3b4f81d72c2cdc692c7d533cb82e0290bde2c965cdd60167423057c63f2172"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.656240 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" event={"ID":"febba13f-b5af-4cb3-be2a-38f379c5b1aa","Type":"ContainerStarted","Data":"1ffd397df81fe79de4a3c442405a7fd57fa3cea8714afd574317b49a1c608e75"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.668845 4805 generic.go:334] "Generic (PLEG): container finished" podID="bcb840ba-94cc-4a7a-beea-6505c3f54f5d" containerID="59d7445bf75aecfb3b80dae3da2d2fc0dd8f32a73db55fda4c59c5049b3b50ed" exitCode=0 Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.669812 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2632-account-create-update-p9bx4" event={"ID":"bcb840ba-94cc-4a7a-beea-6505c3f54f5d","Type":"ContainerDied","Data":"59d7445bf75aecfb3b80dae3da2d2fc0dd8f32a73db55fda4c59c5049b3b50ed"} Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.714238 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/79c16d87-0988-452a-99cc-559a6db293d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.714286 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vsjr\" (UniqueName: \"kubernetes.io/projected/79c16d87-0988-452a-99cc-559a6db293d4-kube-api-access-2vsjr\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.745481 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" podStartSLOduration=2.745456931 podStartE2EDuration="2.745456931s" podCreationTimestamp="2026-02-26 17:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:33.736676829 +0000 UTC m=+1368.298431188" watchObservedRunningTime="2026-02-26 17:37:33.745456931 +0000 UTC m=+1368.307211260" Feb 26 17:37:33 crc kubenswrapper[4805]: I0226 17:37:33.797279 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5832-account-create-update-dk78q" podStartSLOduration=2.797256651 podStartE2EDuration="2.797256651s" podCreationTimestamp="2026-02-26 17:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:33.780677302 +0000 UTC m=+1368.342431641" watchObservedRunningTime="2026-02-26 17:37:33.797256651 +0000 UTC m=+1368.359011010" Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.335139 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2k9nw-config-h7wv6"] Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.348462 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2k9nw-config-h7wv6"] Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.680987 4805 generic.go:334] 
"Generic (PLEG): container finished" podID="febba13f-b5af-4cb3-be2a-38f379c5b1aa" containerID="2d3b4f81d72c2cdc692c7d533cb82e0290bde2c965cdd60167423057c63f2172" exitCode=0 Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.681079 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" event={"ID":"febba13f-b5af-4cb3-be2a-38f379c5b1aa","Type":"ContainerDied","Data":"2d3b4f81d72c2cdc692c7d533cb82e0290bde2c965cdd60167423057c63f2172"} Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.689719 4805 generic.go:334] "Generic (PLEG): container finished" podID="78b81ab7-3249-4960-b05c-7c75f88ed845" containerID="4aa4e1421aad85bae0d2ce9ae9802e3c6bd70d7283d0a45441de19e55988499b" exitCode=0 Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.689951 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5832-account-create-update-dk78q" event={"ID":"78b81ab7-3249-4960-b05c-7c75f88ed845","Type":"ContainerDied","Data":"4aa4e1421aad85bae0d2ce9ae9802e3c6bd70d7283d0a45441de19e55988499b"} Feb 26 17:37:34 crc kubenswrapper[4805]: I0226 17:37:34.965305 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2230e6b5-fd24-44a3-8426-c45cd271d04c" path="/var/lib/kubelet/pods/2230e6b5-fd24-44a3-8426-c45cd271d04c/volumes" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.154538 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.344718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9dv6\" (UniqueName: \"kubernetes.io/projected/92d622ea-77ad-4464-8359-f8e53216ebe8-kube-api-access-f9dv6\") pod \"92d622ea-77ad-4464-8359-f8e53216ebe8\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.344911 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d622ea-77ad-4464-8359-f8e53216ebe8-operator-scripts\") pod \"92d622ea-77ad-4464-8359-f8e53216ebe8\" (UID: \"92d622ea-77ad-4464-8359-f8e53216ebe8\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.351600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d622ea-77ad-4464-8359-f8e53216ebe8-kube-api-access-f9dv6" (OuterVolumeSpecName: "kube-api-access-f9dv6") pod "92d622ea-77ad-4464-8359-f8e53216ebe8" (UID: "92d622ea-77ad-4464-8359-f8e53216ebe8"). InnerVolumeSpecName "kube-api-access-f9dv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.451402 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9dv6\" (UniqueName: \"kubernetes.io/projected/92d622ea-77ad-4464-8359-f8e53216ebe8-kube-api-access-f9dv6\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.460643 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.466489 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.486709 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.508967 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.511289 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656525 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfttj\" (UniqueName: \"kubernetes.io/projected/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-kube-api-access-pfttj\") pod \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656615 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqblk\" (UniqueName: \"kubernetes.io/projected/10b4ec10-7e0f-4640-be48-7ed1584ff69f-kube-api-access-bqblk\") pod \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656659 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzs86\" (UniqueName: \"kubernetes.io/projected/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-kube-api-access-hzs86\") pod \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656699 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-operator-scripts\") pod \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656746 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxcvc\" (UniqueName: \"kubernetes.io/projected/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-kube-api-access-vxcvc\") pod \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\" (UID: \"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10b4ec10-7e0f-4640-be48-7ed1584ff69f-operator-scripts\") pod \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\" (UID: \"10b4ec10-7e0f-4640-be48-7ed1584ff69f\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656947 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb7nj\" (UniqueName: \"kubernetes.io/projected/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-kube-api-access-lb7nj\") pod \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.656992 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-operator-scripts\") pod \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\" (UID: \"bcb840ba-94cc-4a7a-beea-6505c3f54f5d\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.657101 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-operator-scripts\") pod \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\" (UID: \"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc\") " Feb 26 
17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.657145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-operator-scripts\") pod \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\" (UID: \"9d63385b-f9af-42fb-b2eb-b1e3e8975f44\") " Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.660993 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-kube-api-access-hzs86" (OuterVolumeSpecName: "kube-api-access-hzs86") pod "a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" (UID: "a7f037b7-d4fa-44cd-a90e-3ba04fd196dc"). InnerVolumeSpecName "kube-api-access-hzs86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.661441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-kube-api-access-lb7nj" (OuterVolumeSpecName: "kube-api-access-lb7nj") pod "9d63385b-f9af-42fb-b2eb-b1e3e8975f44" (UID: "9d63385b-f9af-42fb-b2eb-b1e3e8975f44"). InnerVolumeSpecName "kube-api-access-lb7nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.661625 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-kube-api-access-pfttj" (OuterVolumeSpecName: "kube-api-access-pfttj") pod "bcb840ba-94cc-4a7a-beea-6505c3f54f5d" (UID: "bcb840ba-94cc-4a7a-beea-6505c3f54f5d"). InnerVolumeSpecName "kube-api-access-pfttj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.664896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-kube-api-access-vxcvc" (OuterVolumeSpecName: "kube-api-access-vxcvc") pod "5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" (UID: "5db511e1-e71e-41b9-98d7-e2e3b3a74f8c"). InnerVolumeSpecName "kube-api-access-vxcvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.664963 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b4ec10-7e0f-4640-be48-7ed1584ff69f-kube-api-access-bqblk" (OuterVolumeSpecName: "kube-api-access-bqblk") pod "10b4ec10-7e0f-4640-be48-7ed1584ff69f" (UID: "10b4ec10-7e0f-4640-be48-7ed1584ff69f"). InnerVolumeSpecName "kube-api-access-bqblk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.699672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9dzgt" event={"ID":"9d63385b-f9af-42fb-b2eb-b1e3e8975f44","Type":"ContainerDied","Data":"e94ce3eaa4d7e995434db65ed75c6e9b8214621cae610782e83cc221aa9f713d"} Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.699718 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94ce3eaa4d7e995434db65ed75c6e9b8214621cae610782e83cc221aa9f713d" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.699721 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9dzgt" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.700930 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0480-account-create-update-pxmst" event={"ID":"a7f037b7-d4fa-44cd-a90e-3ba04fd196dc","Type":"ContainerDied","Data":"435867369b90cc7d4ea62a9cee7ee3bd6bbb03656a2bcce0681e2500c6f0b6d7"} Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.700962 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="435867369b90cc7d4ea62a9cee7ee3bd6bbb03656a2bcce0681e2500c6f0b6d7" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.701038 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0480-account-create-update-pxmst" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.702815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-rdwzk" event={"ID":"5db511e1-e71e-41b9-98d7-e2e3b3a74f8c","Type":"ContainerDied","Data":"82af9491ff68115b0dd7b494795c8bfa8645c17440740ae0178d5195b48329c9"} Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.702838 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82af9491ff68115b0dd7b494795c8bfa8645c17440740ae0178d5195b48329c9" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.702892 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-rdwzk" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.704566 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2632-account-create-update-p9bx4" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.704566 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2632-account-create-update-p9bx4" event={"ID":"bcb840ba-94cc-4a7a-beea-6505c3f54f5d","Type":"ContainerDied","Data":"eb1116a93d6a2648bb8d76b840f2df340e8d34958aaa8dddd64adb21224a005c"} Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.704730 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb1116a93d6a2648bb8d76b840f2df340e8d34958aaa8dddd64adb21224a005c" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.706609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vnpwb" event={"ID":"92d622ea-77ad-4464-8359-f8e53216ebe8","Type":"ContainerDied","Data":"63d43e4d689c086fac7be79b97f7df6fccc4cd22d9f1515c95e7ccbfd5cfecbc"} Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.706616 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vnpwb" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.706643 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d43e4d689c086fac7be79b97f7df6fccc4cd22d9f1515c95e7ccbfd5cfecbc" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.709244 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5wk8w" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.709255 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5wk8w" event={"ID":"10b4ec10-7e0f-4640-be48-7ed1584ff69f","Type":"ContainerDied","Data":"e42282bd3dbe0b79e085b0c606af66352c78efd96fdf31ed3e4058403b9cb6f0"} Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.709342 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42282bd3dbe0b79e085b0c606af66352c78efd96fdf31ed3e4058403b9cb6f0" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.761565 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfttj\" (UniqueName: \"kubernetes.io/projected/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-kube-api-access-pfttj\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.761612 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqblk\" (UniqueName: \"kubernetes.io/projected/10b4ec10-7e0f-4640-be48-7ed1584ff69f-kube-api-access-bqblk\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.761624 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzs86\" (UniqueName: \"kubernetes.io/projected/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-kube-api-access-hzs86\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.761635 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxcvc\" (UniqueName: \"kubernetes.io/projected/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-kube-api-access-vxcvc\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.761648 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb7nj\" (UniqueName: \"kubernetes.io/projected/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-kube-api-access-lb7nj\") on node \"crc\" DevicePath \"\"" 
Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.787156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d63385b-f9af-42fb-b2eb-b1e3e8975f44" (UID: "9d63385b-f9af-42fb-b2eb-b1e3e8975f44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.787243 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" (UID: "5db511e1-e71e-41b9-98d7-e2e3b3a74f8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.787285 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcb840ba-94cc-4a7a-beea-6505c3f54f5d" (UID: "bcb840ba-94cc-4a7a-beea-6505c3f54f5d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.787311 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" (UID: "a7f037b7-d4fa-44cd-a90e-3ba04fd196dc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.787472 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d622ea-77ad-4464-8359-f8e53216ebe8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92d622ea-77ad-4464-8359-f8e53216ebe8" (UID: "92d622ea-77ad-4464-8359-f8e53216ebe8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.788945 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10b4ec10-7e0f-4640-be48-7ed1584ff69f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10b4ec10-7e0f-4640-be48-7ed1584ff69f" (UID: "10b4ec10-7e0f-4640-be48-7ed1584ff69f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.863793 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.863871 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92d622ea-77ad-4464-8359-f8e53216ebe8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.863888 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10b4ec10-7e0f-4640-be48-7ed1584ff69f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.863904 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bcb840ba-94cc-4a7a-beea-6505c3f54f5d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.863943 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.863959 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d63385b-f9af-42fb-b2eb-b1e3e8975f44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:35 crc kubenswrapper[4805]: I0226 17:37:35.941605 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.069894 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rj47\" (UniqueName: \"kubernetes.io/projected/78b81ab7-3249-4960-b05c-7c75f88ed845-kube-api-access-2rj47\") pod \"78b81ab7-3249-4960-b05c-7c75f88ed845\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.070117 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b81ab7-3249-4960-b05c-7c75f88ed845-operator-scripts\") pod \"78b81ab7-3249-4960-b05c-7c75f88ed845\" (UID: \"78b81ab7-3249-4960-b05c-7c75f88ed845\") " Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.070690 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b81ab7-3249-4960-b05c-7c75f88ed845-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78b81ab7-3249-4960-b05c-7c75f88ed845" (UID: "78b81ab7-3249-4960-b05c-7c75f88ed845"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.075868 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b81ab7-3249-4960-b05c-7c75f88ed845-kube-api-access-2rj47" (OuterVolumeSpecName: "kube-api-access-2rj47") pod "78b81ab7-3249-4960-b05c-7c75f88ed845" (UID: "78b81ab7-3249-4960-b05c-7c75f88ed845"). InnerVolumeSpecName "kube-api-access-2rj47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.171943 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78b81ab7-3249-4960-b05c-7c75f88ed845-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.172445 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rj47\" (UniqueName: \"kubernetes.io/projected/78b81ab7-3249-4960-b05c-7c75f88ed845-kube-api-access-2rj47\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.198581 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.375256 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbqz4\" (UniqueName: \"kubernetes.io/projected/febba13f-b5af-4cb3-be2a-38f379c5b1aa-kube-api-access-xbqz4\") pod \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.375406 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febba13f-b5af-4cb3-be2a-38f379c5b1aa-operator-scripts\") pod \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\" (UID: \"febba13f-b5af-4cb3-be2a-38f379c5b1aa\") " Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.376006 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febba13f-b5af-4cb3-be2a-38f379c5b1aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "febba13f-b5af-4cb3-be2a-38f379c5b1aa" (UID: "febba13f-b5af-4cb3-be2a-38f379c5b1aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.391775 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febba13f-b5af-4cb3-be2a-38f379c5b1aa-kube-api-access-xbqz4" (OuterVolumeSpecName: "kube-api-access-xbqz4") pod "febba13f-b5af-4cb3-be2a-38f379c5b1aa" (UID: "febba13f-b5af-4cb3-be2a-38f379c5b1aa"). InnerVolumeSpecName "kube-api-access-xbqz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.477476 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbqz4\" (UniqueName: \"kubernetes.io/projected/febba13f-b5af-4cb3-be2a-38f379c5b1aa-kube-api-access-xbqz4\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.477528 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/febba13f-b5af-4cb3-be2a-38f379c5b1aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.718144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5832-account-create-update-dk78q" event={"ID":"78b81ab7-3249-4960-b05c-7c75f88ed845","Type":"ContainerDied","Data":"3fa831e460531e02e0847dd12193bc493d159db8abeb288f344138d8ef99f484"} Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.718195 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa831e460531e02e0847dd12193bc493d159db8abeb288f344138d8ef99f484" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.718163 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5832-account-create-update-dk78q" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.719958 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" event={"ID":"febba13f-b5af-4cb3-be2a-38f379c5b1aa","Type":"ContainerDied","Data":"1ffd397df81fe79de4a3c442405a7fd57fa3cea8714afd574317b49a1c608e75"} Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.719987 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-6bb0-account-create-update-kjgd9" Feb 26 17:37:36 crc kubenswrapper[4805]: I0226 17:37:36.720001 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ffd397df81fe79de4a3c442405a7fd57fa3cea8714afd574317b49a1c608e75" Feb 26 17:37:37 crc kubenswrapper[4805]: I0226 17:37:37.071310 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mncb5"] Feb 26 17:37:37 crc kubenswrapper[4805]: I0226 17:37:37.081517 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mncb5"] Feb 26 17:37:37 crc kubenswrapper[4805]: I0226 17:37:37.281612 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:37 crc kubenswrapper[4805]: I0226 17:37:37.281673 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:37 crc kubenswrapper[4805]: I0226 17:37:37.284586 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:37 crc kubenswrapper[4805]: I0226 17:37:37.730029 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:38 crc kubenswrapper[4805]: I0226 17:37:38.687118 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 26 17:37:38 crc kubenswrapper[4805]: I0226 17:37:38.963432 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79c16d87-0988-452a-99cc-559a6db293d4" path="/var/lib/kubelet/pods/79c16d87-0988-452a-99cc-559a6db293d4/volumes" Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.121401 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:37:40 crc kubenswrapper[4805]: 
I0226 17:37:40.122248 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="config-reloader" containerID="cri-o://0a6fc87c6e1358119cfe76bc0ee3e0b170d487e30111a2a88de2ff8cb335f1b0" gracePeriod=600 Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.122352 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="prometheus" containerID="cri-o://cfd7f4516b1e377f9ae6dd83a9a71bee2ee6a82f821e09f6f6ccff2dcb87d2e6" gracePeriod=600 Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.122517 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="thanos-sidecar" containerID="cri-o://759f92c87f54e0f15f61adc82ebf04171eead97bed3a72586349d2316cd0318a" gracePeriod=600 Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.760490 4805 generic.go:334] "Generic (PLEG): container finished" podID="657f7632-1861-4399-9731-81e9977c7640" containerID="cfd7f4516b1e377f9ae6dd83a9a71bee2ee6a82f821e09f6f6ccff2dcb87d2e6" exitCode=0 Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.760777 4805 generic.go:334] "Generic (PLEG): container finished" podID="657f7632-1861-4399-9731-81e9977c7640" containerID="759f92c87f54e0f15f61adc82ebf04171eead97bed3a72586349d2316cd0318a" exitCode=0 Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.760785 4805 generic.go:334] "Generic (PLEG): container finished" podID="657f7632-1861-4399-9731-81e9977c7640" containerID="0a6fc87c6e1358119cfe76bc0ee3e0b170d487e30111a2a88de2ff8cb335f1b0" exitCode=0 Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.760559 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerDied","Data":"cfd7f4516b1e377f9ae6dd83a9a71bee2ee6a82f821e09f6f6ccff2dcb87d2e6"} Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.760821 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerDied","Data":"759f92c87f54e0f15f61adc82ebf04171eead97bed3a72586349d2316cd0318a"} Feb 26 17:37:40 crc kubenswrapper[4805]: I0226 17:37:40.760835 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerDied","Data":"0a6fc87c6e1358119cfe76bc0ee3e0b170d487e30111a2a88de2ff8cb335f1b0"} Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.773787 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"657f7632-1861-4399-9731-81e9977c7640","Type":"ContainerDied","Data":"be34f6124874aca36f8ef9de9381bfa07f028b4c95e6aca832e61d23c528dcfc"} Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.774122 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be34f6124874aca36f8ef9de9381bfa07f028b4c95e6aca832e61d23c528dcfc" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.854480 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977073 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-1\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-web-config\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977232 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-0\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977293 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-thanos-prometheus-http-client-file\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977330 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-2\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: 
\"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-tls-assets\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977448 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977481 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/657f7632-1861-4399-9731-81e9977c7640-config-out\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977536 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-config\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977562 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g65c7\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-kube-api-access-g65c7\") pod \"657f7632-1861-4399-9731-81e9977c7640\" (UID: \"657f7632-1861-4399-9731-81e9977c7640\") " Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977825 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.977861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.978092 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.978665 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.982975 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-config" (OuterVolumeSpecName: "config") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.984097 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-kube-api-access-g65c7" (OuterVolumeSpecName: "kube-api-access-g65c7") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "kube-api-access-g65c7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.985404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657f7632-1861-4399-9731-81e9977c7640-config-out" (OuterVolumeSpecName: "config-out") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.985896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:37:41 crc kubenswrapper[4805]: I0226 17:37:41.991655 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.005901 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-web-config" (OuterVolumeSpecName: "web-config") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.008714 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "657f7632-1861-4399-9731-81e9977c7640" (UID: "657f7632-1861-4399-9731-81e9977c7640"). InnerVolumeSpecName "pvc-0d67964b-5983-4e73-ac71-321d1deac404". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079660 4805 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-web-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079689 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079704 4805 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079715 4805 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/657f7632-1861-4399-9731-81e9977c7640-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079726 4805 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079753 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") on node \"crc\" " Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079765 4805 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/657f7632-1861-4399-9731-81e9977c7640-config-out\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079776 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/657f7632-1861-4399-9731-81e9977c7640-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.079785 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g65c7\" (UniqueName: \"kubernetes.io/projected/657f7632-1861-4399-9731-81e9977c7640-kube-api-access-g65c7\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.133355 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.133608 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0d67964b-5983-4e73-ac71-321d1deac404" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404") on node "crc" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.161388 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fqghq"] Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.161876 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb840ba-94cc-4a7a-beea-6505c3f54f5d" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.161901 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb840ba-94cc-4a7a-beea-6505c3f54f5d" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.161911 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="prometheus" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.161922 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="prometheus" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.161940 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.161946 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.161954 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d63385b-f9af-42fb-b2eb-b1e3e8975f44" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.161960 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d63385b-f9af-42fb-b2eb-b1e3e8975f44" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.161978 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d622ea-77ad-4464-8359-f8e53216ebe8" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.161985 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d622ea-77ad-4464-8359-f8e53216ebe8" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162001 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="thanos-sidecar" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162008 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="thanos-sidecar" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162043 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2230e6b5-fd24-44a3-8426-c45cd271d04c" containerName="ovn-config" Feb 26 17:37:42 crc 
kubenswrapper[4805]: I0226 17:37:42.162058 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2230e6b5-fd24-44a3-8426-c45cd271d04c" containerName="ovn-config" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162078 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c16d87-0988-452a-99cc-559a6db293d4" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162094 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c16d87-0988-452a-99cc-559a6db293d4" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162103 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b81ab7-3249-4960-b05c-7c75f88ed845" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162109 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b81ab7-3249-4960-b05c-7c75f88ed845" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162132 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febba13f-b5af-4cb3-be2a-38f379c5b1aa" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162138 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="febba13f-b5af-4cb3-be2a-38f379c5b1aa" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162156 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b4ec10-7e0f-4640-be48-7ed1584ff69f" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162164 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b4ec10-7e0f-4640-be48-7ed1584ff69f" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162178 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="init-config-reloader" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162185 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="init-config-reloader" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162199 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162206 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: E0226 17:37:42.162216 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="config-reloader" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162224 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="config-reloader" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162514 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="config-reloader" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162568 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="prometheus" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162579 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb840ba-94cc-4a7a-beea-6505c3f54f5d" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162601 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b4ec10-7e0f-4640-be48-7ed1584ff69f" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162625 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="657f7632-1861-4399-9731-81e9977c7640" containerName="thanos-sidecar" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162639 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="78b81ab7-3249-4960-b05c-7c75f88ed845" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162648 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="febba13f-b5af-4cb3-be2a-38f379c5b1aa" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162669 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162681 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d63385b-f9af-42fb-b2eb-b1e3e8975f44" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162687 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c16d87-0988-452a-99cc-559a6db293d4" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162696 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2230e6b5-fd24-44a3-8426-c45cd271d04c" containerName="ovn-config" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162704 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d622ea-77ad-4464-8359-f8e53216ebe8" containerName="mariadb-database-create" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.162719 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" containerName="mariadb-account-create-update" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.163566 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.165885 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.171735 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fqghq"] Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.181129 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") on node \"crc\" DevicePath \"\"" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.283079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ecd16e-4963-4856-88c2-5920a4d78948-operator-scripts\") pod \"root-account-create-update-fqghq\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") " pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.283134 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2qt\" (UniqueName: \"kubernetes.io/projected/89ecd16e-4963-4856-88c2-5920a4d78948-kube-api-access-tg2qt\") pod \"root-account-create-update-fqghq\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") " pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.384573 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ecd16e-4963-4856-88c2-5920a4d78948-operator-scripts\") pod \"root-account-create-update-fqghq\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") " pod="openstack/root-account-create-update-fqghq" Feb 26 
17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.384866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2qt\" (UniqueName: \"kubernetes.io/projected/89ecd16e-4963-4856-88c2-5920a4d78948-kube-api-access-tg2qt\") pod \"root-account-create-update-fqghq\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") " pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.385438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ecd16e-4963-4856-88c2-5920a4d78948-operator-scripts\") pod \"root-account-create-update-fqghq\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") " pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.402117 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg2qt\" (UniqueName: \"kubernetes.io/projected/89ecd16e-4963-4856-88c2-5920a4d78948-kube-api-access-tg2qt\") pod \"root-account-create-update-fqghq\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") " pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.480998 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fqghq" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.790891 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.794954 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xbnw6" event={"ID":"570af5c6-aef6-4d10-93c9-439dfb9695ee","Type":"ContainerStarted","Data":"2c7a2a2bccf1b8f758badbc3ec907abf4c7dc3d9d49779bf8bf90daec53c3b63"} Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.831969 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xbnw6" podStartSLOduration=2.532874235 podStartE2EDuration="11.831946776s" podCreationTimestamp="2026-02-26 17:37:31 +0000 UTC" firstStartedPulling="2026-02-26 17:37:32.400294947 +0000 UTC m=+1366.962049286" lastFinishedPulling="2026-02-26 17:37:41.699367488 +0000 UTC m=+1376.261121827" observedRunningTime="2026-02-26 17:37:42.819382119 +0000 UTC m=+1377.381136448" watchObservedRunningTime="2026-02-26 17:37:42.831946776 +0000 UTC m=+1377.393701115" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.845530 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.856500 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.882923 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.893303 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.894904 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.897813 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-f6mbc" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.897903 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.897822 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.897975 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.898091 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.898153 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.898330 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.899305 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.909200 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 26 17:37:42 crc kubenswrapper[4805]: I0226 17:37:42.967673 4805 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="657f7632-1861-4399-9731-81e9977c7640" path="/var/lib/kubelet/pods/657f7632-1861-4399-9731-81e9977c7640/volumes" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003637 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/596906a4-e4c6-4ede-826b-d349dc6a8dbf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003684 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003746 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-config\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003789 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003815 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003850 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003869 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003900 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003923 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003947 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7k8q\" (UniqueName: \"kubernetes.io/projected/596906a4-e4c6-4ede-826b-d349dc6a8dbf-kube-api-access-r7k8q\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.003984 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/596906a4-e4c6-4ede-826b-d349dc6a8dbf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105223 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105274 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105336 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105414 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc 
kubenswrapper[4805]: I0226 17:37:43.105449 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7k8q\" (UniqueName: \"kubernetes.io/projected/596906a4-e4c6-4ede-826b-d349dc6a8dbf-kube-api-access-r7k8q\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/596906a4-e4c6-4ede-826b-d349dc6a8dbf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105542 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/596906a4-e4c6-4ede-826b-d349dc6a8dbf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105663 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-config\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105691 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105714 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.105740 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.106858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.106923 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.107578 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/596906a4-e4c6-4ede-826b-d349dc6a8dbf-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.109592 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.109630 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/46c8a399786a5d13d427e175cc49bbe19d4e67b986514e0609ea6d8887bfe9ac/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.110165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/596906a4-e4c6-4ede-826b-d349dc6a8dbf-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0" Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.111665 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/596906a4-e4c6-4ede-826b-d349dc6a8dbf-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.113219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.113816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-config\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.115469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.118624 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.119816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.134597 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7k8q\" (UniqueName: \"kubernetes.io/projected/596906a4-e4c6-4ede-826b-d349dc6a8dbf-kube-api-access-r7k8q\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.137502 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/596906a4-e4c6-4ede-826b-d349dc6a8dbf-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.152240 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0d67964b-5983-4e73-ac71-321d1deac404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0d67964b-5983-4e73-ac71-321d1deac404\") pod \"prometheus-metric-storage-0\" (UID: \"596906a4-e4c6-4ede-826b-d349dc6a8dbf\") " pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.185502 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fqghq"]
Feb 26 17:37:43 crc kubenswrapper[4805]: W0226 17:37:43.189046 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89ecd16e_4963_4856_88c2_5920a4d78948.slice/crio-b1725f89b3e5aa472beb4a917e3ea45c15ce70f6056384e2656ff6e053784862 WatchSource:0}: Error finding container b1725f89b3e5aa472beb4a917e3ea45c15ce70f6056384e2656ff6e053784862: Status 404 returned error can't find the container with id b1725f89b3e5aa472beb4a917e3ea45c15ce70f6056384e2656ff6e053784862
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.233704 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.674428 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.802370 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"596906a4-e4c6-4ede-826b-d349dc6a8dbf","Type":"ContainerStarted","Data":"ba70e2f651249ebb35870262e86986f7e41ec97910aa8b6787a70ae1fe7e439d"}
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.805057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sqks9" event={"ID":"f21b0b57-d027-42a1-a3c9-b4030f589db8","Type":"ContainerStarted","Data":"9faab5d1fbc11ac72a3512734c8b3416acd207ea773e248b500ca5d9782edac5"}
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.808682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fqghq" event={"ID":"89ecd16e-4963-4856-88c2-5920a4d78948","Type":"ContainerStarted","Data":"2c7cb108bd257ac156539abc7b21657be86edcb446efe7138b0ce22e88549bf2"}
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.808714 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fqghq" event={"ID":"89ecd16e-4963-4856-88c2-5920a4d78948","Type":"ContainerStarted","Data":"b1725f89b3e5aa472beb4a917e3ea45c15ce70f6056384e2656ff6e053784862"}
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.836448 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-sqks9" podStartSLOduration=9.059339263 podStartE2EDuration="39.836428657s" podCreationTimestamp="2026-02-26 17:37:04 +0000 UTC" firstStartedPulling="2026-02-26 17:37:11.884696831 +0000 UTC m=+1346.446451170" lastFinishedPulling="2026-02-26 17:37:42.661786225 +0000 UTC m=+1377.223540564" observedRunningTime="2026-02-26 17:37:43.820464963 +0000 UTC m=+1378.382219312" watchObservedRunningTime="2026-02-26 17:37:43.836428657 +0000 UTC m=+1378.398182996"
Feb 26 17:37:43 crc kubenswrapper[4805]: I0226 17:37:43.843355 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-fqghq" podStartSLOduration=1.843332261 podStartE2EDuration="1.843332261s" podCreationTimestamp="2026-02-26 17:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:43.84132454 +0000 UTC m=+1378.403078889" watchObservedRunningTime="2026-02-26 17:37:43.843332261 +0000 UTC m=+1378.405086600"
Feb 26 17:37:48 crc kubenswrapper[4805]: I0226 17:37:48.864174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"596906a4-e4c6-4ede-826b-d349dc6a8dbf","Type":"ContainerStarted","Data":"4f20e19038cbe4a006378e2f9e108f3dde8a6781133c2f6de6e87cab41c60eac"}
Feb 26 17:37:49 crc kubenswrapper[4805]: I0226 17:37:49.873735 4805 generic.go:334] "Generic (PLEG): container finished" podID="89ecd16e-4963-4856-88c2-5920a4d78948" containerID="2c7cb108bd257ac156539abc7b21657be86edcb446efe7138b0ce22e88549bf2" exitCode=0
Feb 26 17:37:49 crc kubenswrapper[4805]: I0226 17:37:49.873829 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fqghq" event={"ID":"89ecd16e-4963-4856-88c2-5920a4d78948","Type":"ContainerDied","Data":"2c7cb108bd257ac156539abc7b21657be86edcb446efe7138b0ce22e88549bf2"}
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.253525 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fqghq"
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.379731 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2qt\" (UniqueName: \"kubernetes.io/projected/89ecd16e-4963-4856-88c2-5920a4d78948-kube-api-access-tg2qt\") pod \"89ecd16e-4963-4856-88c2-5920a4d78948\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") "
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.380038 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ecd16e-4963-4856-88c2-5920a4d78948-operator-scripts\") pod \"89ecd16e-4963-4856-88c2-5920a4d78948\" (UID: \"89ecd16e-4963-4856-88c2-5920a4d78948\") "
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.380648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89ecd16e-4963-4856-88c2-5920a4d78948-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89ecd16e-4963-4856-88c2-5920a4d78948" (UID: "89ecd16e-4963-4856-88c2-5920a4d78948"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.385212 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89ecd16e-4963-4856-88c2-5920a4d78948-kube-api-access-tg2qt" (OuterVolumeSpecName: "kube-api-access-tg2qt") pod "89ecd16e-4963-4856-88c2-5920a4d78948" (UID: "89ecd16e-4963-4856-88c2-5920a4d78948"). InnerVolumeSpecName "kube-api-access-tg2qt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.483838 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89ecd16e-4963-4856-88c2-5920a4d78948-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.483886 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg2qt\" (UniqueName: \"kubernetes.io/projected/89ecd16e-4963-4856-88c2-5920a4d78948-kube-api-access-tg2qt\") on node \"crc\" DevicePath \"\""
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.900846 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fqghq" event={"ID":"89ecd16e-4963-4856-88c2-5920a4d78948","Type":"ContainerDied","Data":"b1725f89b3e5aa472beb4a917e3ea45c15ce70f6056384e2656ff6e053784862"}
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.900894 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1725f89b3e5aa472beb4a917e3ea45c15ce70f6056384e2656ff6e053784862"
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.900929 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fqghq"
Feb 26 17:37:51 crc kubenswrapper[4805]: I0226 17:37:51.997686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:37:52 crc kubenswrapper[4805]: I0226 17:37:52.002739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a466ee40-e6ef-4a36-96c6-88e7ce00a28c-etc-swift\") pod \"swift-storage-0\" (UID: \"a466ee40-e6ef-4a36-96c6-88e7ce00a28c\") " pod="openstack/swift-storage-0"
Feb 26 17:37:52 crc kubenswrapper[4805]: I0226 17:37:52.161263 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 26 17:37:52 crc kubenswrapper[4805]: I0226 17:37:52.739111 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 26 17:37:52 crc kubenswrapper[4805]: I0226 17:37:52.910884 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"b7e1fef9a0903e5028deb2e429e6be51344989ce293f6853b74153e58d8b7b8b"}
Feb 26 17:37:53 crc kubenswrapper[4805]: I0226 17:37:53.924955 4805 generic.go:334] "Generic (PLEG): container finished" podID="570af5c6-aef6-4d10-93c9-439dfb9695ee" containerID="2c7a2a2bccf1b8f758badbc3ec907abf4c7dc3d9d49779bf8bf90daec53c3b63" exitCode=0
Feb 26 17:37:53 crc kubenswrapper[4805]: I0226 17:37:53.925143 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xbnw6" event={"ID":"570af5c6-aef6-4d10-93c9-439dfb9695ee","Type":"ContainerDied","Data":"2c7a2a2bccf1b8f758badbc3ec907abf4c7dc3d9d49779bf8bf90daec53c3b63"}
Feb 26 17:37:54 crc kubenswrapper[4805]: I0226 17:37:54.937916 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"c00c1a76c7a6e1646a836447711171fd233a1bae12a3877044b11c567fe2fbc2"}
Feb 26 17:37:54 crc kubenswrapper[4805]: I0226 17:37:54.940502 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"e9c0a965b42651808e325eecf66e8cb8c4e6f434403b12c7bc2994304dc5515f"}
Feb 26 17:37:54 crc kubenswrapper[4805]: I0226 17:37:54.940634 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"09507fda606530fba8fefe30d3d87ac139e6f0d9938c99f9050e848a5959a6c0"}
Feb 26 17:37:54 crc kubenswrapper[4805]: I0226 17:37:54.940737 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"6b381c22e035df90f14b1fd84e0f12b4e471143b0f833cf77926a89f8190e972"}
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.288333 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xbnw6"
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.371095 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/570af5c6-aef6-4d10-93c9-439dfb9695ee-kube-api-access-xcdk4\") pod \"570af5c6-aef6-4d10-93c9-439dfb9695ee\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") "
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.371454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-config-data\") pod \"570af5c6-aef6-4d10-93c9-439dfb9695ee\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") "
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.371661 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-combined-ca-bundle\") pod \"570af5c6-aef6-4d10-93c9-439dfb9695ee\" (UID: \"570af5c6-aef6-4d10-93c9-439dfb9695ee\") "
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.377636 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570af5c6-aef6-4d10-93c9-439dfb9695ee-kube-api-access-xcdk4" (OuterVolumeSpecName: "kube-api-access-xcdk4") pod "570af5c6-aef6-4d10-93c9-439dfb9695ee" (UID: "570af5c6-aef6-4d10-93c9-439dfb9695ee"). InnerVolumeSpecName "kube-api-access-xcdk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.396540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "570af5c6-aef6-4d10-93c9-439dfb9695ee" (UID: "570af5c6-aef6-4d10-93c9-439dfb9695ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.417918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-config-data" (OuterVolumeSpecName: "config-data") pod "570af5c6-aef6-4d10-93c9-439dfb9695ee" (UID: "570af5c6-aef6-4d10-93c9-439dfb9695ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.473971 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcdk4\" (UniqueName: \"kubernetes.io/projected/570af5c6-aef6-4d10-93c9-439dfb9695ee-kube-api-access-xcdk4\") on node \"crc\" DevicePath \"\""
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.474147 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.474241 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/570af5c6-aef6-4d10-93c9-439dfb9695ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.951170 4805 generic.go:334] "Generic (PLEG): container finished" podID="596906a4-e4c6-4ede-826b-d349dc6a8dbf" containerID="4f20e19038cbe4a006378e2f9e108f3dde8a6781133c2f6de6e87cab41c60eac" exitCode=0
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.951276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"596906a4-e4c6-4ede-826b-d349dc6a8dbf","Type":"ContainerDied","Data":"4f20e19038cbe4a006378e2f9e108f3dde8a6781133c2f6de6e87cab41c60eac"}
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.956250 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xbnw6" event={"ID":"570af5c6-aef6-4d10-93c9-439dfb9695ee","Type":"ContainerDied","Data":"cf2cd1dd4246972ada24759ba4510dd670fa1507db64fecfe0c3009c37be2bc7"}
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.956290 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf2cd1dd4246972ada24759ba4510dd670fa1507db64fecfe0c3009c37be2bc7"
Feb 26 17:37:55 crc kubenswrapper[4805]: I0226 17:37:55.956333 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xbnw6"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.360433 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xldbg"]
Feb 26 17:37:56 crc kubenswrapper[4805]: E0226 17:37:56.361570 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89ecd16e-4963-4856-88c2-5920a4d78948" containerName="mariadb-account-create-update"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.361598 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="89ecd16e-4963-4856-88c2-5920a4d78948" containerName="mariadb-account-create-update"
Feb 26 17:37:56 crc kubenswrapper[4805]: E0226 17:37:56.361628 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570af5c6-aef6-4d10-93c9-439dfb9695ee" containerName="keystone-db-sync"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.361635 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="570af5c6-aef6-4d10-93c9-439dfb9695ee" containerName="keystone-db-sync"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.361854 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="89ecd16e-4963-4856-88c2-5920a4d78948" containerName="mariadb-account-create-update"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.361884 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="570af5c6-aef6-4d10-93c9-439dfb9695ee" containerName="keystone-db-sync"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.362783 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.371627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.372262 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.372417 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.372787 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvnnf"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.372856 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.398047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xldbg"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.436234 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zksjg"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.438126 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.488610 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zksjg"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496688 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-fernet-keys\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sc9f\" (UniqueName: \"kubernetes.io/projected/11e19e55-a5cb-4119-b0c4-f51c5eae979d-kube-api-access-2sc9f\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496814 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-config\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496856 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55nn\" (UniqueName: \"kubernetes.io/projected/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-kube-api-access-r55nn\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496881 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496901 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-combined-ca-bundle\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496916 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496936 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-scripts\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-credential-keys\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.496991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-config-data\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.598884 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-fernet-keys\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.598929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sc9f\" (UniqueName: \"kubernetes.io/projected/11e19e55-a5cb-4119-b0c4-f51c5eae979d-kube-api-access-2sc9f\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.598981 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599004 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-config\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599058 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r55nn\" (UniqueName: \"kubernetes.io/projected/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-kube-api-access-r55nn\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599088 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599105 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-combined-ca-bundle\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599127 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-scripts\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-credential-keys\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.599199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-config-data\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.603903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-fernet-keys\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.605167 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.606066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-config\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.613304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.614617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-scripts\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.615795 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-dns-svc\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.636951 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-credential-keys\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.658107 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-combined-ca-bundle\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.662663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-config-data\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.673822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sc9f\" (UniqueName: \"kubernetes.io/projected/11e19e55-a5cb-4119-b0c4-f51c5eae979d-kube-api-access-2sc9f\") pod \"dnsmasq-dns-5c9d85d47c-zksjg\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.677820 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r55nn\" (UniqueName: \"kubernetes.io/projected/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-kube-api-access-r55nn\") pod \"keystone-bootstrap-xldbg\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.786491 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xldbg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.809786 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.828673 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.828772 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.853528 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.853901 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.871056 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912095 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-run-httpd\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-log-httpd\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912249 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgp7k\" (UniqueName: \"kubernetes.io/projected/137f038e-91ff-44e0-9f5c-616765295b23-kube-api-access-rgp7k\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912287 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912338 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-config-data\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.912366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-scripts\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.921987 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zksjg"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.954875 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-5v7qp"]
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.956079 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5v7qp"
Feb 26 17:37:56 crc kubenswrapper[4805]: I0226 17:37:56.965562 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.135818 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-log-httpd\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.135917 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgp7k\" (UniqueName: \"kubernetes.io/projected/137f038e-91ff-44e0-9f5c-616765295b23-kube-api-access-rgp7k\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.136046 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.136203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-config-data\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.136296 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-scripts\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0"
Feb
26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.136367 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-run-httpd\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.136437 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.138273 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5fk5n" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.207231 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.208974 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-log-httpd\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.209257 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.210909 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.223836 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-run-httpd\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.225254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-scripts\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.225455 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"596906a4-e4c6-4ede-826b-d349dc6a8dbf","Type":"ContainerStarted","Data":"df788a6a9006b5df707b4b5a95ecb7a92d4aadd164df5395ab0a5bbb6b87a418"} Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.238905 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-xgqc6"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.250149 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dgxjz"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.252378 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.257593 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-79wtt"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.259894 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.264221 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.268153 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-config\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.268325 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-combined-ca-bundle\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.268370 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n9n5\" (UniqueName: \"kubernetes.io/projected/b61d3acd-4133-4fe1-82b0-05036641ed78-kube-api-access-6n9n5\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.268959 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.269258 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.269328 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5v7qp"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.269912 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"3bbe353d2dcce19e65927a3fd7c9188d4d3f33e0dd35225a8d0522eb0c654657"} Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.280783 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.280966 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fdwmg" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.281234 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-ttgs5" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.281333 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rw98k" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.281425 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.281570 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.281749 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.282004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-config-data\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.282229 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.282799 4805 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-xgqc6"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.290584 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dgxjz"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.298883 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-79wtt"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.307990 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgp7k\" (UniqueName: \"kubernetes.io/projected/137f038e-91ff-44e0-9f5c-616765295b23-kube-api-access-rgp7k\") pod \"ceilometer-0\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.321203 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-79b2z"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.323154 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.348370 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wg9r2"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.349542 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.351708 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nxm6r" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.351948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.360145 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wg9r2"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gf7p\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-kube-api-access-9gf7p\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-combined-ca-bundle\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f3346-b7f3-48c3-b177-86d8adb7d190-logs\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370354 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9m9l\" (UniqueName: 
\"kubernetes.io/projected/5c0f3346-b7f3-48c3-b177-86d8adb7d190-kube-api-access-r9m9l\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370383 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-config-data\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370400 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-certs\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370421 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-scripts\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370452 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-config\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370473 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-config-data\") pod 
\"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370512 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-combined-ca-bundle\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370528 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58gb\" (UniqueName: \"kubernetes.io/projected/8969f13b-8f7b-4e4e-a891-eac8a978bb42-kube-api-access-n58gb\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370549 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-combined-ca-bundle\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8969f13b-8f7b-4e4e-a891-eac8a978bb42-etc-machine-id\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370624 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-scripts\") pod 
\"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370647 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-config-data\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370666 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-combined-ca-bundle\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370701 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n9n5\" (UniqueName: \"kubernetes.io/projected/b61d3acd-4133-4fe1-82b0-05036641ed78-kube-api-access-6n9n5\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370724 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-scripts\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.370748 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-db-sync-config-data\") pod \"cinder-db-sync-dgxjz\" (UID: 
\"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.382768 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-79b2z"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.385437 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-config\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.389950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-combined-ca-bundle\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.391714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n9n5\" (UniqueName: \"kubernetes.io/projected/b61d3acd-4133-4fe1-82b0-05036641ed78-kube-api-access-6n9n5\") pod \"neutron-db-sync-5v7qp\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.458904 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.472899 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-config-data\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.472970 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-config\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-combined-ca-bundle\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58gb\" (UniqueName: \"kubernetes.io/projected/8969f13b-8f7b-4e4e-a891-eac8a978bb42-kube-api-access-n58gb\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473081 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-combined-ca-bundle\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 
crc kubenswrapper[4805]: I0226 17:37:57.473116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473162 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp4cx\" (UniqueName: \"kubernetes.io/projected/1627cc17-6ee2-4176-b719-aa04e00aa881-kube-api-access-wp4cx\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473188 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8969f13b-8f7b-4e4e-a891-eac8a978bb42-etc-machine-id\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-db-sync-config-data\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473270 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-scripts\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473305 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-config-data\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsjp\" (UniqueName: \"kubernetes.io/projected/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-kube-api-access-thsjp\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-combined-ca-bundle\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473410 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-scripts\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473427 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473459 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-db-sync-config-data\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gf7p\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-kube-api-access-9gf7p\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-combined-ca-bundle\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473531 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f3346-b7f3-48c3-b177-86d8adb7d190-logs\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473579 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9m9l\" (UniqueName: \"kubernetes.io/projected/5c0f3346-b7f3-48c3-b177-86d8adb7d190-kube-api-access-r9m9l\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-config-data\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-certs\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.473691 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-scripts\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.475899 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8969f13b-8f7b-4e4e-a891-eac8a978bb42-etc-machine-id\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.479550 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f3346-b7f3-48c3-b177-86d8adb7d190-logs\") pod \"placement-db-sync-79wtt\" (UID: 
\"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.494889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-config-data\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.495750 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-scripts\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.496602 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-scripts\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.497241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-db-sync-config-data\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.497449 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-scripts\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.497635 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-certs\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.497757 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-combined-ca-bundle\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.498266 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-combined-ca-bundle\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.498856 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-config-data\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.499366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9m9l\" (UniqueName: \"kubernetes.io/projected/5c0f3346-b7f3-48c3-b177-86d8adb7d190-kube-api-access-r9m9l\") pod \"placement-db-sync-79wtt\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.499925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-config-data\") pod 
\"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.502426 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58gb\" (UniqueName: \"kubernetes.io/projected/8969f13b-8f7b-4e4e-a891-eac8a978bb42-kube-api-access-n58gb\") pod \"cinder-db-sync-dgxjz\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.503947 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.510295 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-combined-ca-bundle\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.551975 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gf7p\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-kube-api-access-9gf7p\") pod \"cloudkitty-db-sync-xgqc6\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575369 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thsjp\" (UniqueName: \"kubernetes.io/projected/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-kube-api-access-thsjp\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575432 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-combined-ca-bundle\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575547 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-config\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575661 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp4cx\" (UniqueName: 
\"kubernetes.io/projected/1627cc17-6ee2-4176-b719-aa04e00aa881-kube-api-access-wp4cx\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.575707 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-db-sync-config-data\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.578211 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-dns-svc\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.580709 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-sb\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.581944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-config\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.583007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-nb\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: 
\"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.587203 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-db-sync-config-data\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.594642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-combined-ca-bundle\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.605014 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thsjp\" (UniqueName: \"kubernetes.io/projected/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-kube-api-access-thsjp\") pod \"dnsmasq-dns-6ffb94d8ff-79b2z\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.609900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp4cx\" (UniqueName: \"kubernetes.io/projected/1627cc17-6ee2-4176-b719-aa04e00aa881-kube-api-access-wp4cx\") pod \"barbican-db-sync-wg9r2\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.774688 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zksjg"] Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.775092 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.791359 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-79wtt" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.812292 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.824073 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.841650 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:37:57 crc kubenswrapper[4805]: I0226 17:37:57.919667 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xldbg"] Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.045942 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-5v7qp"] Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.110298 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.294385 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xldbg" event={"ID":"89a5f81d-d1f1-4bed-8df9-82c5fab4274e","Type":"ContainerStarted","Data":"b5feb4135848dceef55e2849b8592ac9ab40d09a59ac77e89d172a52cbfbe2cc"} Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.296774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg" event={"ID":"11e19e55-a5cb-4119-b0c4-f51c5eae979d","Type":"ContainerStarted","Data":"5ac57b41d160da5f42475d274d9d8f8d0c6610a01851296cf81ea3294a785156"} Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.301731 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-5v7qp" event={"ID":"b61d3acd-4133-4fe1-82b0-05036641ed78","Type":"ContainerStarted","Data":"7e962d678086bc0bb48c0211cfee83c26c66c543ed43646368260ee139d58b58"} Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.316213 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"c4732d6b95f166408d37e580630d9f10b5bf59838fc95f1ed0fd1972a2a6b435"} Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.416734 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dgxjz"] Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.667370 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-xgqc6"] Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.683849 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-79wtt"] Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.834360 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-79b2z"] Feb 26 17:37:58 crc kubenswrapper[4805]: W0226 17:37:58.850675 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97c6376_ee3c_4aff_b2d5_0ba210dc1771.slice/crio-b10b7bbd24ab0cf3739d3d557b510ba56d7cc3e1df46f3a8049a73298109d6d1 WatchSource:0}: Error finding container b10b7bbd24ab0cf3739d3d557b510ba56d7cc3e1df46f3a8049a73298109d6d1: Status 404 returned error can't find the container with id b10b7bbd24ab0cf3739d3d557b510ba56d7cc3e1df46f3a8049a73298109d6d1 Feb 26 17:37:58 crc kubenswrapper[4805]: I0226 17:37:58.945517 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wg9r2"] Feb 26 17:37:58 crc kubenswrapper[4805]: W0226 17:37:58.953404 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1627cc17_6ee2_4176_b719_aa04e00aa881.slice/crio-1057c4aab3ef7e12bde5a49f07fe16f246c02dc63644f1ea10456957cd9f2312 WatchSource:0}: Error finding container 1057c4aab3ef7e12bde5a49f07fe16f246c02dc63644f1ea10456957cd9f2312: Status 404 returned error can't find the container with id 1057c4aab3ef7e12bde5a49f07fe16f246c02dc63644f1ea10456957cd9f2312 Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.028447 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.329430 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-79wtt" event={"ID":"5c0f3346-b7f3-48c3-b177-86d8adb7d190","Type":"ContainerStarted","Data":"64f4d4735a14313fd70a0c8594bee07292a5a083dcf9c883c01b528909a5271c"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.373418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"7ef945207e66bd697f689a736cb753a4f0ea3f426ab29306083c0bf39361bc13"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.375816 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerStarted","Data":"712e1d0d98e73b81e9376915ef3fa85b0c1af98dd1ec8c61f644473c5d562022"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.379830 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xldbg" event={"ID":"89a5f81d-d1f1-4bed-8df9-82c5fab4274e","Type":"ContainerStarted","Data":"ba092aa320bc7b5360b30987fa561519721cdf764e9d7169e2e3f7bce8d0c26d"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.410090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dgxjz" 
event={"ID":"8969f13b-8f7b-4e4e-a891-eac8a978bb42","Type":"ContainerStarted","Data":"3e2376219ea3e4f9170fac82e02d4cae84985828c02abae3ee7f7d0983bbf932"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.416412 4805 generic.go:334] "Generic (PLEG): container finished" podID="11e19e55-a5cb-4119-b0c4-f51c5eae979d" containerID="9b26719b879d98c7d6903a6b403cebb1d348011f76e5e3ae031ae83a4e40848a" exitCode=0 Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.416517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg" event={"ID":"11e19e55-a5cb-4119-b0c4-f51c5eae979d","Type":"ContainerDied","Data":"9b26719b879d98c7d6903a6b403cebb1d348011f76e5e3ae031ae83a4e40848a"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.420326 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" event={"ID":"d97c6376-ee3c-4aff-b2d5-0ba210dc1771","Type":"ContainerStarted","Data":"b10b7bbd24ab0cf3739d3d557b510ba56d7cc3e1df46f3a8049a73298109d6d1"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.428243 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wg9r2" event={"ID":"1627cc17-6ee2-4176-b719-aa04e00aa881","Type":"ContainerStarted","Data":"1057c4aab3ef7e12bde5a49f07fe16f246c02dc63644f1ea10456957cd9f2312"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.435383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5v7qp" event={"ID":"b61d3acd-4133-4fe1-82b0-05036641ed78","Type":"ContainerStarted","Data":"5c3805d730f539b9aaabdacf6e1567a961008f7e3d540451d5b5e7f9e27ded8d"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.439521 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xldbg" podStartSLOduration=3.4394835329999998 podStartE2EDuration="3.439483533s" podCreationTimestamp="2026-02-26 17:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:59.410282304 +0000 UTC m=+1393.972036643" watchObservedRunningTime="2026-02-26 17:37:59.439483533 +0000 UTC m=+1394.001237872" Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.486014 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-5v7qp" podStartSLOduration=3.485983399 podStartE2EDuration="3.485983399s" podCreationTimestamp="2026-02-26 17:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:37:59.454431641 +0000 UTC m=+1394.016185980" watchObservedRunningTime="2026-02-26 17:37:59.485983399 +0000 UTC m=+1394.047737738" Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.499376 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-xgqc6" event={"ID":"96d28605-2282-4e5f-93d6-5a3023c7bc9c","Type":"ContainerStarted","Data":"2dbd58277dae4a81b363eba2481ae68eb589de3e839b54475217bdc6f86b03c8"} Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.838331 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg" Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.970064 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sc9f\" (UniqueName: \"kubernetes.io/projected/11e19e55-a5cb-4119-b0c4-f51c5eae979d-kube-api-access-2sc9f\") pod \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.970187 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-dns-svc\") pod \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.970340 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-nb\") pod \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.970474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-sb\") pod \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " Feb 26 17:37:59 crc kubenswrapper[4805]: I0226 17:37:59.970494 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-config\") pod \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\" (UID: \"11e19e55-a5cb-4119-b0c4-f51c5eae979d\") " Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.010215 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/11e19e55-a5cb-4119-b0c4-f51c5eae979d-kube-api-access-2sc9f" (OuterVolumeSpecName: "kube-api-access-2sc9f") pod "11e19e55-a5cb-4119-b0c4-f51c5eae979d" (UID: "11e19e55-a5cb-4119-b0c4-f51c5eae979d"). InnerVolumeSpecName "kube-api-access-2sc9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.016000 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-config" (OuterVolumeSpecName: "config") pod "11e19e55-a5cb-4119-b0c4-f51c5eae979d" (UID: "11e19e55-a5cb-4119-b0c4-f51c5eae979d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.031792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11e19e55-a5cb-4119-b0c4-f51c5eae979d" (UID: "11e19e55-a5cb-4119-b0c4-f51c5eae979d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.045362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "11e19e55-a5cb-4119-b0c4-f51c5eae979d" (UID: "11e19e55-a5cb-4119-b0c4-f51c5eae979d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.056678 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "11e19e55-a5cb-4119-b0c4-f51c5eae979d" (UID: "11e19e55-a5cb-4119-b0c4-f51c5eae979d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.072549 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.072800 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.072813 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sc9f\" (UniqueName: \"kubernetes.io/projected/11e19e55-a5cb-4119-b0c4-f51c5eae979d-kube-api-access-2sc9f\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.072825 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.072835 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/11e19e55-a5cb-4119-b0c4-f51c5eae979d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.138508 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535458-b6pvp"] Feb 26 17:38:00 crc kubenswrapper[4805]: E0226 17:38:00.139152 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e19e55-a5cb-4119-b0c4-f51c5eae979d" containerName="init" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.139179 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e19e55-a5cb-4119-b0c4-f51c5eae979d" containerName="init" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.139411 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="11e19e55-a5cb-4119-b0c4-f51c5eae979d" containerName="init" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.140291 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.148385 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.148392 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.148518 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.180128 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535458-b6pvp"] Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.275614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw5pm\" (UniqueName: \"kubernetes.io/projected/3ea12120-9c03-4819-a2ae-61bf83333dea-kube-api-access-nw5pm\") pod \"auto-csr-approver-29535458-b6pvp\" (UID: \"3ea12120-9c03-4819-a2ae-61bf83333dea\") " pod="openshift-infra/auto-csr-approver-29535458-b6pvp" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.382394 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw5pm\" (UniqueName: \"kubernetes.io/projected/3ea12120-9c03-4819-a2ae-61bf83333dea-kube-api-access-nw5pm\") pod \"auto-csr-approver-29535458-b6pvp\" (UID: \"3ea12120-9c03-4819-a2ae-61bf83333dea\") " pod="openshift-infra/auto-csr-approver-29535458-b6pvp" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.480987 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-nw5pm\" (UniqueName: \"kubernetes.io/projected/3ea12120-9c03-4819-a2ae-61bf83333dea-kube-api-access-nw5pm\") pod \"auto-csr-approver-29535458-b6pvp\" (UID: \"3ea12120-9c03-4819-a2ae-61bf83333dea\") " pod="openshift-infra/auto-csr-approver-29535458-b6pvp" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.528950 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"10632fe66a287980c85a2c501bed2d62661a99fe6888c69ada7f5ede19ab46a8"} Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.533922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg" event={"ID":"11e19e55-a5cb-4119-b0c4-f51c5eae979d","Type":"ContainerDied","Data":"5ac57b41d160da5f42475d274d9d8f8d0c6610a01851296cf81ea3294a785156"} Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.533972 4805 scope.go:117] "RemoveContainer" containerID="9b26719b879d98c7d6903a6b403cebb1d348011f76e5e3ae031ae83a4e40848a" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.534176 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9d85d47c-zksjg" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.556591 4805 generic.go:334] "Generic (PLEG): container finished" podID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerID="71fef9220f3f0561cbb260622402f5ed9d9fce3ba48930b96b9dc8a12db0a248" exitCode=0 Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.556683 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" event={"ID":"d97c6376-ee3c-4aff-b2d5-0ba210dc1771","Type":"ContainerDied","Data":"71fef9220f3f0561cbb260622402f5ed9d9fce3ba48930b96b9dc8a12db0a248"} Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.621215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"596906a4-e4c6-4ede-826b-d349dc6a8dbf","Type":"ContainerStarted","Data":"1eb2872e6326c49fa84052fd5a80beab80a6de021f4c3a29e0e02431923a49b1"} Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.621246 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"596906a4-e4c6-4ede-826b-d349dc6a8dbf","Type":"ContainerStarted","Data":"ce2e3de1f1009e75509cf1161d413eab77f877b657e6d82777a7bc6f65767da2"} Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.678623 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zksjg"] Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.715145 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9d85d47c-zksjg"] Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.717528 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.717514609 podStartE2EDuration="18.717514609s" podCreationTimestamp="2026-02-26 17:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-26 17:38:00.671518746 +0000 UTC m=+1395.233273085" watchObservedRunningTime="2026-02-26 17:38:00.717514609 +0000 UTC m=+1395.279268948" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.775775 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" Feb 26 17:38:00 crc kubenswrapper[4805]: I0226 17:38:00.996996 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e19e55-a5cb-4119-b0c4-f51c5eae979d" path="/var/lib/kubelet/pods/11e19e55-a5cb-4119-b0c4-f51c5eae979d/volumes" Feb 26 17:38:01 crc kubenswrapper[4805]: I0226 17:38:01.620585 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535458-b6pvp"] Feb 26 17:38:01 crc kubenswrapper[4805]: I0226 17:38:01.654007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" event={"ID":"d97c6376-ee3c-4aff-b2d5-0ba210dc1771","Type":"ContainerStarted","Data":"0b02a39c8da61c7922f1e82a6d38d46adc4de0077e74b351c23db6a30154bd68"} Feb 26 17:38:01 crc kubenswrapper[4805]: I0226 17:38:01.654160 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:38:01 crc kubenswrapper[4805]: I0226 17:38:01.682104 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" podStartSLOduration=4.68208783 podStartE2EDuration="4.68208783s" podCreationTimestamp="2026-02-26 17:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:01.680533071 +0000 UTC m=+1396.242287410" watchObservedRunningTime="2026-02-26 17:38:01.68208783 +0000 UTC m=+1396.243842169" Feb 26 17:38:03 crc kubenswrapper[4805]: I0226 17:38:03.234495 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/prometheus-metric-storage-0" Feb 26 17:38:03 crc kubenswrapper[4805]: I0226 17:38:03.729475 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" event={"ID":"3ea12120-9c03-4819-a2ae-61bf83333dea","Type":"ContainerStarted","Data":"bf0065d9d8810d5b97eada495acfd6b3ccf7bbe3dc976c9480b5b1fbffe5df49"} Feb 26 17:38:03 crc kubenswrapper[4805]: I0226 17:38:03.732461 4805 generic.go:334] "Generic (PLEG): container finished" podID="f21b0b57-d027-42a1-a3c9-b4030f589db8" containerID="9faab5d1fbc11ac72a3512734c8b3416acd207ea773e248b500ca5d9782edac5" exitCode=0 Feb 26 17:38:03 crc kubenswrapper[4805]: I0226 17:38:03.732527 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sqks9" event={"ID":"f21b0b57-d027-42a1-a3c9-b4030f589db8","Type":"ContainerDied","Data":"9faab5d1fbc11ac72a3512734c8b3416acd207ea773e248b500ca5d9782edac5"} Feb 26 17:38:04 crc kubenswrapper[4805]: I0226 17:38:04.742393 4805 generic.go:334] "Generic (PLEG): container finished" podID="89a5f81d-d1f1-4bed-8df9-82c5fab4274e" containerID="ba092aa320bc7b5360b30987fa561519721cdf764e9d7169e2e3f7bce8d0c26d" exitCode=0 Feb 26 17:38:04 crc kubenswrapper[4805]: I0226 17:38:04.742473 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xldbg" event={"ID":"89a5f81d-d1f1-4bed-8df9-82c5fab4274e","Type":"ContainerDied","Data":"ba092aa320bc7b5360b30987fa561519721cdf764e9d7169e2e3f7bce8d0c26d"} Feb 26 17:38:05 crc kubenswrapper[4805]: I0226 17:38:05.991888 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sqks9" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.040579 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-config-data\") pod \"f21b0b57-d027-42a1-a3c9-b4030f589db8\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.040822 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-combined-ca-bundle\") pod \"f21b0b57-d027-42a1-a3c9-b4030f589db8\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.040908 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-db-sync-config-data\") pod \"f21b0b57-d027-42a1-a3c9-b4030f589db8\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.040959 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zcs\" (UniqueName: \"kubernetes.io/projected/f21b0b57-d027-42a1-a3c9-b4030f589db8-kube-api-access-g5zcs\") pod \"f21b0b57-d027-42a1-a3c9-b4030f589db8\" (UID: \"f21b0b57-d027-42a1-a3c9-b4030f589db8\") " Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.049818 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21b0b57-d027-42a1-a3c9-b4030f589db8-kube-api-access-g5zcs" (OuterVolumeSpecName: "kube-api-access-g5zcs") pod "f21b0b57-d027-42a1-a3c9-b4030f589db8" (UID: "f21b0b57-d027-42a1-a3c9-b4030f589db8"). InnerVolumeSpecName "kube-api-access-g5zcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.051369 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f21b0b57-d027-42a1-a3c9-b4030f589db8" (UID: "f21b0b57-d027-42a1-a3c9-b4030f589db8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.075276 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f21b0b57-d027-42a1-a3c9-b4030f589db8" (UID: "f21b0b57-d027-42a1-a3c9-b4030f589db8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.103879 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-config-data" (OuterVolumeSpecName: "config-data") pod "f21b0b57-d027-42a1-a3c9-b4030f589db8" (UID: "f21b0b57-d027-42a1-a3c9-b4030f589db8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.145111 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.145160 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.145171 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zcs\" (UniqueName: \"kubernetes.io/projected/f21b0b57-d027-42a1-a3c9-b4030f589db8-kube-api-access-g5zcs\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.145183 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b0b57-d027-42a1-a3c9-b4030f589db8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.765466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sqks9" event={"ID":"f21b0b57-d027-42a1-a3c9-b4030f589db8","Type":"ContainerDied","Data":"f3ac6945ccb25945b32289bff6c662aa3ea1842af94a452a9d1f0fcf2a5107ab"} Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.765512 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ac6945ccb25945b32289bff6c662aa3ea1842af94a452a9d1f0fcf2a5107ab" Feb 26 17:38:06 crc kubenswrapper[4805]: I0226 17:38:06.765534 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sqks9" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.492457 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-79b2z"] Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.493210 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" containerID="cri-o://0b02a39c8da61c7922f1e82a6d38d46adc4de0077e74b351c23db6a30154bd68" gracePeriod=10 Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.495098 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.529614 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56798b757f-6sfst"] Feb 26 17:38:07 crc kubenswrapper[4805]: E0226 17:38:07.530507 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21b0b57-d027-42a1-a3c9-b4030f589db8" containerName="glance-db-sync" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.530527 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21b0b57-d027-42a1-a3c9-b4030f589db8" containerName="glance-db-sync" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.530721 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21b0b57-d027-42a1-a3c9-b4030f589db8" containerName="glance-db-sync" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.532583 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.545152 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-6sfst"] Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.589251 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.589309 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.589364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ndwk\" (UniqueName: \"kubernetes.io/projected/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-kube-api-access-6ndwk\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.589384 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-config\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.589411 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-dns-svc\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.691623 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ndwk\" (UniqueName: \"kubernetes.io/projected/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-kube-api-access-6ndwk\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.691688 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-config\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.691710 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-dns-svc\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.691813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.691846 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.692730 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-sb\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.693564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-config\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.694772 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-dns-svc\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.695561 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-nb\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.736773 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ndwk\" (UniqueName: 
\"kubernetes.io/projected/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-kube-api-access-6ndwk\") pod \"dnsmasq-dns-56798b757f-6sfst\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.830451 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: connect: connection refused" Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.842380 4805 generic.go:334] "Generic (PLEG): container finished" podID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerID="0b02a39c8da61c7922f1e82a6d38d46adc4de0077e74b351c23db6a30154bd68" exitCode=0 Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.842433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" event={"ID":"d97c6376-ee3c-4aff-b2d5-0ba210dc1771","Type":"ContainerDied","Data":"0b02a39c8da61c7922f1e82a6d38d46adc4de0077e74b351c23db6a30154bd68"} Feb 26 17:38:07 crc kubenswrapper[4805]: I0226 17:38:07.873570 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.285291 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.288350 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.292387 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ff6b8" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.292815 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.293048 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.299110 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.426426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.426554 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-config-data\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.426598 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-scripts\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 
17:38:08.426643 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnkc\" (UniqueName: \"kubernetes.io/projected/aca42ff7-721d-4578-b0b9-d7905eb67afc-kube-api-access-dxnkc\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.426668 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-logs\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.426743 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.426796 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528038 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: 
\"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528133 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528252 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-config-data\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528294 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-scripts\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528334 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnkc\" (UniqueName: \"kubernetes.io/projected/aca42ff7-721d-4578-b0b9-d7905eb67afc-kube-api-access-dxnkc\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " 
pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-logs\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.528833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-logs\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.529497 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.533478 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.534308 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-scripts\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.534475 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-config-data\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.535066 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.535108 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a78d83e29b9efed98adc7cd32a238f67178734183d9dc19f1da819604a3e7a12/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.551787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnkc\" (UniqueName: \"kubernetes.io/projected/aca42ff7-721d-4578-b0b9-d7905eb67afc-kube-api-access-dxnkc\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.564564 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.566297 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.569633 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.578379 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.619395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.629986 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-256mk\" (UniqueName: \"kubernetes.io/projected/3a821121-9d7e-4ff5-9e30-70bb10327945-kube-api-access-256mk\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.630056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.630087 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.630211 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.630482 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.630686 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-logs\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.630771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.732684 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.732789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.732832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-256mk\" (UniqueName: \"kubernetes.io/projected/3a821121-9d7e-4ff5-9e30-70bb10327945-kube-api-access-256mk\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.732859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.732891 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.732945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.733053 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.733349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-logs\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.733685 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.736422 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.736460 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0507a3f442b97fe1466b7dfe12c3b0da8e1c69cf48d5061e83bd97d1212f1f63/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.737720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.741787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.742307 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.752996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-256mk\" (UniqueName: \"kubernetes.io/projected/3a821121-9d7e-4ff5-9e30-70bb10327945-kube-api-access-256mk\") pod 
\"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.788884 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.922487 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:38:08 crc kubenswrapper[4805]: I0226 17:38:08.977808 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:10 crc kubenswrapper[4805]: I0226 17:38:10.295502 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:10 crc kubenswrapper[4805]: I0226 17:38:10.525425 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.013081 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xldbg" event={"ID":"89a5f81d-d1f1-4bed-8df9-82c5fab4274e","Type":"ContainerDied","Data":"b5feb4135848dceef55e2849b8592ac9ab40d09a59ac77e89d172a52cbfbe2cc"} Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.013370 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5feb4135848dceef55e2849b8592ac9ab40d09a59ac77e89d172a52cbfbe2cc" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.078599 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xldbg" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.277371 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-combined-ca-bundle\") pod \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.277797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-config-data\") pod \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.277917 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-fernet-keys\") pod \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.277950 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-scripts\") pod \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.278043 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r55nn\" (UniqueName: \"kubernetes.io/projected/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-kube-api-access-r55nn\") pod \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.278067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-credential-keys\") pod \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\" (UID: \"89a5f81d-d1f1-4bed-8df9-82c5fab4274e\") " Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.284432 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "89a5f81d-d1f1-4bed-8df9-82c5fab4274e" (UID: "89a5f81d-d1f1-4bed-8df9-82c5fab4274e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.297843 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-scripts" (OuterVolumeSpecName: "scripts") pod "89a5f81d-d1f1-4bed-8df9-82c5fab4274e" (UID: "89a5f81d-d1f1-4bed-8df9-82c5fab4274e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.297857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-kube-api-access-r55nn" (OuterVolumeSpecName: "kube-api-access-r55nn") pod "89a5f81d-d1f1-4bed-8df9-82c5fab4274e" (UID: "89a5f81d-d1f1-4bed-8df9-82c5fab4274e"). InnerVolumeSpecName "kube-api-access-r55nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.300662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "89a5f81d-d1f1-4bed-8df9-82c5fab4274e" (UID: "89a5f81d-d1f1-4bed-8df9-82c5fab4274e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.310074 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-config-data" (OuterVolumeSpecName: "config-data") pod "89a5f81d-d1f1-4bed-8df9-82c5fab4274e" (UID: "89a5f81d-d1f1-4bed-8df9-82c5fab4274e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.310157 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89a5f81d-d1f1-4bed-8df9-82c5fab4274e" (UID: "89a5f81d-d1f1-4bed-8df9-82c5fab4274e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.380950 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.380999 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.381011 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.381039 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:12 crc 
kubenswrapper[4805]: I0226 17:38:12.381051 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r55nn\" (UniqueName: \"kubernetes.io/projected/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-kube-api-access-r55nn\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:12 crc kubenswrapper[4805]: I0226 17:38:12.381065 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89a5f81d-d1f1-4bed-8df9-82c5fab4274e-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.021188 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xldbg" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.161123 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xldbg"] Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.168413 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xldbg"] Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.235101 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.245682 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.266611 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jlsft"] Feb 26 17:38:13 crc kubenswrapper[4805]: E0226 17:38:13.267318 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a5f81d-d1f1-4bed-8df9-82c5fab4274e" containerName="keystone-bootstrap" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.267344 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a5f81d-d1f1-4bed-8df9-82c5fab4274e" containerName="keystone-bootstrap" Feb 26 17:38:13 crc kubenswrapper[4805]: 
I0226 17:38:13.267552 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a5f81d-d1f1-4bed-8df9-82c5fab4274e" containerName="keystone-bootstrap" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.268412 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.270414 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.270664 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.270929 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvnnf" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.271371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.280344 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jlsft"] Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.290189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-scripts\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.290270 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-kube-api-access-xq9dl\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: 
I0226 17:38:13.290327 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-credential-keys\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.290359 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-config-data\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.290395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-fernet-keys\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.290413 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-combined-ca-bundle\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.393758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-scripts\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.394410 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-kube-api-access-xq9dl\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.394634 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-credential-keys\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.396928 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-config-data\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.396971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-fernet-keys\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.397068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-combined-ca-bundle\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.401004 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-scripts\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.402572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-config-data\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.402650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-fernet-keys\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.403846 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-credential-keys\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.408777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-combined-ca-bundle\") pod \"keystone-bootstrap-jlsft\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.411721 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-kube-api-access-xq9dl\") pod \"keystone-bootstrap-jlsft\" (UID: 
\"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:13 crc kubenswrapper[4805]: I0226 17:38:13.600951 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:14 crc kubenswrapper[4805]: I0226 17:38:14.037224 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 26 17:38:14 crc kubenswrapper[4805]: I0226 17:38:14.966957 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a5f81d-d1f1-4bed-8df9-82c5fab4274e" path="/var/lib/kubelet/pods/89a5f81d-d1f1-4bed-8df9-82c5fab4274e/volumes" Feb 26 17:38:16 crc kubenswrapper[4805]: E0226 17:38:16.861763 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 26 17:38:16 crc kubenswrapper[4805]: E0226 17:38:16.862276 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wp4cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wg9r2_openstack(1627cc17-6ee2-4176-b719-aa04e00aa881): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:38:16 crc kubenswrapper[4805]: E0226 17:38:16.863478 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wg9r2" 
podUID="1627cc17-6ee2-4176-b719-aa04e00aa881" Feb 26 17:38:17 crc kubenswrapper[4805]: E0226 17:38:17.061034 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wg9r2" podUID="1627cc17-6ee2-4176-b719-aa04e00aa881" Feb 26 17:38:17 crc kubenswrapper[4805]: I0226 17:38:17.825708 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Feb 26 17:38:19 crc kubenswrapper[4805]: I0226 17:38:19.080699 4805 generic.go:334] "Generic (PLEG): container finished" podID="b61d3acd-4133-4fe1-82b0-05036641ed78" containerID="5c3805d730f539b9aaabdacf6e1567a961008f7e3d540451d5b5e7f9e27ded8d" exitCode=0 Feb 26 17:38:19 crc kubenswrapper[4805]: I0226 17:38:19.080745 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5v7qp" event={"ID":"b61d3acd-4133-4fe1-82b0-05036641ed78","Type":"ContainerDied","Data":"5c3805d730f539b9aaabdacf6e1567a961008f7e3d540451d5b5e7f9e27ded8d"} Feb 26 17:38:22 crc kubenswrapper[4805]: I0226 17:38:22.826298 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Feb 26 17:38:22 crc kubenswrapper[4805]: I0226 17:38:22.827106 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:38:27 crc kubenswrapper[4805]: I0226 17:38:27.827761 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" 
podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Feb 26 17:38:30 crc kubenswrapper[4805]: E0226 17:38:30.431294 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 26 17:38:30 crc kubenswrapper[4805]: E0226 17:38:30.432480 4805 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 26 17:38:30 crc kubenswrapper[4805]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 26 17:38:30 crc kubenswrapper[4805]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nw5pm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29535458-b6pvp_openshift-infra(3ea12120-9c03-4819-a2ae-61bf83333dea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 26 17:38:30 crc kubenswrapper[4805]: > logger="UnhandledError" Feb 26 17:38:30 crc kubenswrapper[4805]: E0226 17:38:30.433695 4805 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" podUID="3ea12120-9c03-4819-a2ae-61bf83333dea" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.567883 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.747634 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thsjp\" (UniqueName: \"kubernetes.io/projected/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-kube-api-access-thsjp\") pod \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.747814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-config\") pod \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.747865 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-sb\") pod \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.747913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-dns-svc\") pod \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.747945 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-nb\") pod \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\" (UID: \"d97c6376-ee3c-4aff-b2d5-0ba210dc1771\") " Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.756526 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-kube-api-access-thsjp" (OuterVolumeSpecName: "kube-api-access-thsjp") pod "d97c6376-ee3c-4aff-b2d5-0ba210dc1771" (UID: "d97c6376-ee3c-4aff-b2d5-0ba210dc1771"). InnerVolumeSpecName "kube-api-access-thsjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.803865 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-config" (OuterVolumeSpecName: "config") pod "d97c6376-ee3c-4aff-b2d5-0ba210dc1771" (UID: "d97c6376-ee3c-4aff-b2d5-0ba210dc1771"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.804316 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d97c6376-ee3c-4aff-b2d5-0ba210dc1771" (UID: "d97c6376-ee3c-4aff-b2d5-0ba210dc1771"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.805367 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d97c6376-ee3c-4aff-b2d5-0ba210dc1771" (UID: "d97c6376-ee3c-4aff-b2d5-0ba210dc1771"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.816987 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d97c6376-ee3c-4aff-b2d5-0ba210dc1771" (UID: "d97c6376-ee3c-4aff-b2d5-0ba210dc1771"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.851161 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.851210 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.851222 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.851234 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.851247 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thsjp\" (UniqueName: \"kubernetes.io/projected/d97c6376-ee3c-4aff-b2d5-0ba210dc1771-kube-api-access-thsjp\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:30 crc kubenswrapper[4805]: I0226 17:38:30.924589 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.059982 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n9n5\" (UniqueName: \"kubernetes.io/projected/b61d3acd-4133-4fe1-82b0-05036641ed78-kube-api-access-6n9n5\") pod \"b61d3acd-4133-4fe1-82b0-05036641ed78\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.060216 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-combined-ca-bundle\") pod \"b61d3acd-4133-4fe1-82b0-05036641ed78\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.060448 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-config\") pod \"b61d3acd-4133-4fe1-82b0-05036641ed78\" (UID: \"b61d3acd-4133-4fe1-82b0-05036641ed78\") " Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.066469 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61d3acd-4133-4fe1-82b0-05036641ed78-kube-api-access-6n9n5" (OuterVolumeSpecName: "kube-api-access-6n9n5") pod "b61d3acd-4133-4fe1-82b0-05036641ed78" (UID: "b61d3acd-4133-4fe1-82b0-05036641ed78"). InnerVolumeSpecName "kube-api-access-6n9n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.091867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b61d3acd-4133-4fe1-82b0-05036641ed78" (UID: "b61d3acd-4133-4fe1-82b0-05036641ed78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.098318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-config" (OuterVolumeSpecName: "config") pod "b61d3acd-4133-4fe1-82b0-05036641ed78" (UID: "b61d3acd-4133-4fe1-82b0-05036641ed78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.164729 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.164772 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n9n5\" (UniqueName: \"kubernetes.io/projected/b61d3acd-4133-4fe1-82b0-05036641ed78-kube-api-access-6n9n5\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.164786 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61d3acd-4133-4fe1-82b0-05036641ed78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.212918 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" event={"ID":"d97c6376-ee3c-4aff-b2d5-0ba210dc1771","Type":"ContainerDied","Data":"b10b7bbd24ab0cf3739d3d557b510ba56d7cc3e1df46f3a8049a73298109d6d1"} Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.213246 4805 scope.go:117] "RemoveContainer" containerID="0b02a39c8da61c7922f1e82a6d38d46adc4de0077e74b351c23db6a30154bd68" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.212992 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.224488 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-5v7qp" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.228776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-5v7qp" event={"ID":"b61d3acd-4133-4fe1-82b0-05036641ed78","Type":"ContainerDied","Data":"7e962d678086bc0bb48c0211cfee83c26c66c543ed43646368260ee139d58b58"} Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.228839 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e962d678086bc0bb48c0211cfee83c26c66c543ed43646368260ee139d58b58" Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.254003 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-79b2z"] Feb 26 17:38:31 crc kubenswrapper[4805]: I0226 17:38:31.270782 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ffb94d8ff-79b2z"] Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.049350 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" podUID="3ea12120-9c03-4819-a2ae-61bf83333dea" Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.197797 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.198386 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n58gb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dgxjz_openstack(8969f13b-8f7b-4e4e-a891-eac8a978bb42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.199855 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dgxjz" podUID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.242122 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dgxjz" podUID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.321617 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-6sfst"] Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.395433 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-69fdbbb8fb-b5bss"] Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.396423 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b61d3acd-4133-4fe1-82b0-05036641ed78" containerName="neutron-db-sync" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.396455 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b61d3acd-4133-4fe1-82b0-05036641ed78" 
containerName="neutron-db-sync" Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.396476 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="init" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.396485 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="init" Feb 26 17:38:32 crc kubenswrapper[4805]: E0226 17:38:32.396493 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.396503 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.396830 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.396854 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b61d3acd-4133-4fe1-82b0-05036641ed78" containerName="neutron-db-sync" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.408473 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.413572 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5fk5n" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.413835 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.414046 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.414202 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.421683 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-rsnj7"] Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.423779 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.443195 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69fdbbb8fb-b5bss"] Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.469823 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-rsnj7"] Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.509465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-combined-ca-bundle\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.509539 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-ovndb-tls-certs\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.509667 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-config\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.509687 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-httpd-config\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc 
kubenswrapper[4805]: I0226 17:38:32.509741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snp6r\" (UniqueName: \"kubernetes.io/projected/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-kube-api-access-snp6r\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.611431 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-config\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.611498 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-httpd-config\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.611564 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.611607 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snp6r\" (UniqueName: \"kubernetes.io/projected/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-kube-api-access-snp6r\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.611680 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-combined-ca-bundle\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.612208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-ovndb-tls-certs\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.612267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.612328 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-config\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.612347 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-dns-svc\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.612373 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntztd\" (UniqueName: \"kubernetes.io/projected/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-kube-api-access-ntztd\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.616738 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-ovndb-tls-certs\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.618220 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-combined-ca-bundle\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.619871 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-config\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.622565 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-httpd-config\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.634096 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snp6r\" (UniqueName: 
\"kubernetes.io/projected/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-kube-api-access-snp6r\") pod \"neutron-69fdbbb8fb-b5bss\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.714631 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.714755 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-config\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.714804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-dns-svc\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.714835 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntztd\" (UniqueName: \"kubernetes.io/projected/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-kube-api-access-ntztd\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.714929 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-sb\") pod 
\"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.725108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-sb\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.725164 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-nb\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.725475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-dns-svc\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.725610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-config\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.747306 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.764489 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntztd\" (UniqueName: \"kubernetes.io/projected/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-kube-api-access-ntztd\") pod \"dnsmasq-dns-b6c948c7-rsnj7\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.829453 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6ffb94d8ff-79b2z" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Feb 26 17:38:32 crc kubenswrapper[4805]: I0226 17:38:32.971657 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97c6376-ee3c-4aff-b2d5-0ba210dc1771" path="/var/lib/kubelet/pods/d97c6376-ee3c-4aff-b2d5-0ba210dc1771/volumes" Feb 26 17:38:33 crc kubenswrapper[4805]: I0226 17:38:33.061410 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:34 crc kubenswrapper[4805]: I0226 17:38:34.778722 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66c9c57f69-5r7hv"] Feb 26 17:38:34 crc kubenswrapper[4805]: I0226 17:38:34.781565 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:34 crc kubenswrapper[4805]: I0226 17:38:34.784180 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 26 17:38:34 crc kubenswrapper[4805]: I0226 17:38:34.785368 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 26 17:38:34 crc kubenswrapper[4805]: I0226 17:38:34.795258 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66c9c57f69-5r7hv"] Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.133501 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-internal-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.133650 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-combined-ca-bundle\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.133726 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-ovndb-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.133846 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-config\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.133948 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-httpd-config\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.134037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrbd\" (UniqueName: \"kubernetes.io/projected/04a821a2-53df-4081-a120-61b7b90b3120-kube-api-access-wrrbd\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.134061 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-public-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.235862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-httpd-config\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.235936 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrbd\" (UniqueName: 
\"kubernetes.io/projected/04a821a2-53df-4081-a120-61b7b90b3120-kube-api-access-wrrbd\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.235964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-public-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.236008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-internal-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.236136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-combined-ca-bundle\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.236169 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-ovndb-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.236194 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-config\") pod 
\"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.242366 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-config\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.242634 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-internal-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.243998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-httpd-config\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.255332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-public-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.256475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-combined-ca-bundle\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc 
kubenswrapper[4805]: I0226 17:38:35.258700 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrbd\" (UniqueName: \"kubernetes.io/projected/04a821a2-53df-4081-a120-61b7b90b3120-kube-api-access-wrrbd\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.258830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-ovndb-tls-certs\") pod \"neutron-66c9c57f69-5r7hv\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:35 crc kubenswrapper[4805]: I0226 17:38:35.406928 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:36 crc kubenswrapper[4805]: I0226 17:38:36.756283 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-6sfst"] Feb 26 17:38:36 crc kubenswrapper[4805]: I0226 17:38:36.839328 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:36 crc kubenswrapper[4805]: I0226 17:38:36.926703 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:39 crc kubenswrapper[4805]: I0226 17:38:39.337834 4805 scope.go:117] "RemoveContainer" containerID="71fef9220f3f0561cbb260622402f5ed9d9fce3ba48930b96b9dc8a12db0a248" Feb 26 17:38:39 crc kubenswrapper[4805]: W0226 17:38:39.398371 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc705c0f4_7fdd_44ed_8341_210b50ec8e3f.slice/crio-0a861499e3d49ff2e2bce0ac4c9b44cdf270a7009087227fecf2796490cadf87 WatchSource:0}: Error finding container 
0a861499e3d49ff2e2bce0ac4c9b44cdf270a7009087227fecf2796490cadf87: Status 404 returned error can't find the container with id 0a861499e3d49ff2e2bce0ac4c9b44cdf270a7009087227fecf2796490cadf87 Feb 26 17:38:39 crc kubenswrapper[4805]: I0226 17:38:39.836656 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jlsft"] Feb 26 17:38:40 crc kubenswrapper[4805]: W0226 17:38:40.133250 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a161dc5_9880_4ec1_a1cc_cd6abc30a9d4.slice/crio-d9b1e512eaf81d4e1c533a09caf87d9efb76051102fcd8191012b926b5f69901 WatchSource:0}: Error finding container d9b1e512eaf81d4e1c533a09caf87d9efb76051102fcd8191012b926b5f69901: Status 404 returned error can't find the container with id d9b1e512eaf81d4e1c533a09caf87d9efb76051102fcd8191012b926b5f69901 Feb 26 17:38:40 crc kubenswrapper[4805]: E0226 17:38:40.161563 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 26 17:38:40 crc kubenswrapper[4805]: E0226 17:38:40.161672 4805 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 26 17:38:40 crc kubenswrapper[4805]: E0226 17:38:40.161877 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gf7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-xgqc6_openstack(96d28605-2282-4e5f-93d6-5a3023c7bc9c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:38:40 crc kubenswrapper[4805]: E0226 17:38:40.163036 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-xgqc6" podUID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.372904 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3a821121-9d7e-4ff5-9e30-70bb10327945","Type":"ContainerStarted","Data":"c21709b0882f409a63cc3bb30f8e40a63dbbfddbacfd1f390271cfecf4576f98"} Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.375485 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aca42ff7-721d-4578-b0b9-d7905eb67afc","Type":"ContainerStarted","Data":"ca341b78d513adf25045a97655c7a4281c2ba7508a50b1cb80d6a36ca0daecc9"} Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.377167 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-6sfst" event={"ID":"c705c0f4-7fdd-44ed-8341-210b50ec8e3f","Type":"ContainerStarted","Data":"0a861499e3d49ff2e2bce0ac4c9b44cdf270a7009087227fecf2796490cadf87"} Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.378556 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlsft" 
event={"ID":"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4","Type":"ContainerStarted","Data":"d9b1e512eaf81d4e1c533a09caf87d9efb76051102fcd8191012b926b5f69901"} Feb 26 17:38:40 crc kubenswrapper[4805]: E0226 17:38:40.392156 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-xgqc6" podUID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.692295 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-rsnj7"] Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.724853 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-69fdbbb8fb-b5bss"] Feb 26 17:38:40 crc kubenswrapper[4805]: W0226 17:38:40.760515 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70970a4c_b0b2_4648_b4c8_2ad75dfd14a6.slice/crio-48b9cc38af575ee160c400a0f06eb5620c3a6ecd00be0cba5348b1d4f90d5f90 WatchSource:0}: Error finding container 48b9cc38af575ee160c400a0f06eb5620c3a6ecd00be0cba5348b1d4f90d5f90: Status 404 returned error can't find the container with id 48b9cc38af575ee160c400a0f06eb5620c3a6ecd00be0cba5348b1d4f90d5f90 Feb 26 17:38:40 crc kubenswrapper[4805]: I0226 17:38:40.842099 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66c9c57f69-5r7hv"] Feb 26 17:38:40 crc kubenswrapper[4805]: W0226 17:38:40.867346 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a821a2_53df_4081_a120_61b7b90b3120.slice/crio-01281376c435352a21faf37222dc57188d7413dab2b2fd50066d6f2d6ee970b2 WatchSource:0}: Error finding container 01281376c435352a21faf37222dc57188d7413dab2b2fd50066d6f2d6ee970b2: 
Status 404 returned error can't find the container with id 01281376c435352a21faf37222dc57188d7413dab2b2fd50066d6f2d6ee970b2 Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.399629 4805 generic.go:334] "Generic (PLEG): container finished" podID="c705c0f4-7fdd-44ed-8341-210b50ec8e3f" containerID="8e0cfdb63960e6c4c984b6703fcb097bb2787b840a6872cb074c9dfa3bc2e5f1" exitCode=0 Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.399941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-6sfst" event={"ID":"c705c0f4-7fdd-44ed-8341-210b50ec8e3f","Type":"ContainerDied","Data":"8e0cfdb63960e6c4c984b6703fcb097bb2787b840a6872cb074c9dfa3bc2e5f1"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.418334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerStarted","Data":"42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.457267 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"78df99c97915af9756cff83bd5fc2f5b5c008e3975f989be1a5fb29a7bf87060"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.457499 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"ba5f425e408dac9e069ae2b9c3054b24f7e96a4ba37d9970f7e32ebbfcb2032c"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.474864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlsft" event={"ID":"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4","Type":"ContainerStarted","Data":"f18cd5e30cd9c44aaa5bf52ec9a00c41d4f42071caad7ddb87a3efcc5e7c2330"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.491985 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3a821121-9d7e-4ff5-9e30-70bb10327945","Type":"ContainerStarted","Data":"58fc52c99076e2838d8bfcf3d2f8af6e5596d67cd37fd3decdfc29983cd2b591"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.501906 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jlsft" podStartSLOduration=28.5018744 podStartE2EDuration="28.5018744s" podCreationTimestamp="2026-02-26 17:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:41.497728556 +0000 UTC m=+1436.059482905" watchObservedRunningTime="2026-02-26 17:38:41.5018744 +0000 UTC m=+1436.063628729" Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.503248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aca42ff7-721d-4578-b0b9-d7905eb67afc","Type":"ContainerStarted","Data":"c7c1091ee56e42ac4522064377a7570e7fb361d89a82f24a0b487b5f2d271f27"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.508993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" event={"ID":"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6","Type":"ContainerStarted","Data":"e84a782878a5f7da958a196b8a0122c13307ebbcbf330b97e7d4a4b85c5ed3f1"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.509051 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" event={"ID":"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6","Type":"ContainerStarted","Data":"48b9cc38af575ee160c400a0f06eb5620c3a6ecd00be0cba5348b1d4f90d5f90"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.512223 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wg9r2" 
event={"ID":"1627cc17-6ee2-4176-b719-aa04e00aa881","Type":"ContainerStarted","Data":"56941ff444a4e8d6538e0392b00c752ffa2d15a3bb5f28f4f4b3d1fb49310946"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.526692 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-79wtt" event={"ID":"5c0f3346-b7f3-48c3-b177-86d8adb7d190","Type":"ContainerStarted","Data":"5e8b2dc525618a74dce4e1d421f9b858f0f55249f5cd42c645e6d4d81f6dd1ea"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.530618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66c9c57f69-5r7hv" event={"ID":"04a821a2-53df-4081-a120-61b7b90b3120","Type":"ContainerStarted","Data":"ba7f10b014dccb0e6ad27fd1d69ec2a4e15adba750fa28c92b9528f3da0a5662"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.530664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66c9c57f69-5r7hv" event={"ID":"04a821a2-53df-4081-a120-61b7b90b3120","Type":"ContainerStarted","Data":"01281376c435352a21faf37222dc57188d7413dab2b2fd50066d6f2d6ee970b2"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.542407 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fdbbb8fb-b5bss" event={"ID":"ffe736fb-9cf2-4686-ac9f-d9da17f1e567","Type":"ContainerStarted","Data":"a11287f85caeabc215617eb14feed8be325853f4380b7830acfac44c9b585f31"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.542460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fdbbb8fb-b5bss" event={"ID":"ffe736fb-9cf2-4686-ac9f-d9da17f1e567","Type":"ContainerStarted","Data":"6754a832a94c9b06a5ad9201ab2d7d563400c1c704b9288a2c6345f668e29490"} Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.620938 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-79wtt" podStartSLOduration=13.849994012 podStartE2EDuration="45.620918095s" podCreationTimestamp="2026-02-26 17:37:56 +0000 
UTC" firstStartedPulling="2026-02-26 17:37:58.673498914 +0000 UTC m=+1393.235253273" lastFinishedPulling="2026-02-26 17:38:30.444423007 +0000 UTC m=+1425.006177356" observedRunningTime="2026-02-26 17:38:41.57398556 +0000 UTC m=+1436.135739899" watchObservedRunningTime="2026-02-26 17:38:41.620918095 +0000 UTC m=+1436.182672434" Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.646815 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wg9r2" podStartSLOduration=3.37834143 podStartE2EDuration="44.646793378s" podCreationTimestamp="2026-02-26 17:37:57 +0000 UTC" firstStartedPulling="2026-02-26 17:37:58.969112209 +0000 UTC m=+1393.530866548" lastFinishedPulling="2026-02-26 17:38:40.237564147 +0000 UTC m=+1434.799318496" observedRunningTime="2026-02-26 17:38:41.613506348 +0000 UTC m=+1436.175260697" watchObservedRunningTime="2026-02-26 17:38:41.646793378 +0000 UTC m=+1436.208547717" Feb 26 17:38:41 crc kubenswrapper[4805]: I0226 17:38:41.931766 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.120466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-config\") pod \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.120601 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-dns-svc\") pod \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.120734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-sb\") pod \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.120792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-nb\") pod \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.120853 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ndwk\" (UniqueName: \"kubernetes.io/projected/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-kube-api-access-6ndwk\") pod \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\" (UID: \"c705c0f4-7fdd-44ed-8341-210b50ec8e3f\") " Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.156656 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-kube-api-access-6ndwk" (OuterVolumeSpecName: "kube-api-access-6ndwk") pod "c705c0f4-7fdd-44ed-8341-210b50ec8e3f" (UID: "c705c0f4-7fdd-44ed-8341-210b50ec8e3f"). InnerVolumeSpecName "kube-api-access-6ndwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.167252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-config" (OuterVolumeSpecName: "config") pod "c705c0f4-7fdd-44ed-8341-210b50ec8e3f" (UID: "c705c0f4-7fdd-44ed-8341-210b50ec8e3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.173635 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c705c0f4-7fdd-44ed-8341-210b50ec8e3f" (UID: "c705c0f4-7fdd-44ed-8341-210b50ec8e3f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.181702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c705c0f4-7fdd-44ed-8341-210b50ec8e3f" (UID: "c705c0f4-7fdd-44ed-8341-210b50ec8e3f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.185895 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c705c0f4-7fdd-44ed-8341-210b50ec8e3f" (UID: "c705c0f4-7fdd-44ed-8341-210b50ec8e3f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.245507 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ndwk\" (UniqueName: \"kubernetes.io/projected/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-kube-api-access-6ndwk\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.245549 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.245561 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.245572 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.245581 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c705c0f4-7fdd-44ed-8341-210b50ec8e3f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.585558 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3a821121-9d7e-4ff5-9e30-70bb10327945","Type":"ContainerStarted","Data":"9718a253aa93240f56efe6ddb3b0c2cfd4cd75b9a841a2d747bc2b5fe20281c0"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.595322 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66c9c57f69-5r7hv" 
event={"ID":"04a821a2-53df-4081-a120-61b7b90b3120","Type":"ContainerStarted","Data":"1016df8f9d8a15b6c98fe96ede4c5294b7dc0bac93ab731f36779f30155311b4"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.595503 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.609047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aca42ff7-721d-4578-b0b9-d7905eb67afc","Type":"ContainerStarted","Data":"c3a592281ea48f892468880fe01ad89a1e941383763c34095886d7c7a13342ff"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.609216 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-log" containerID="cri-o://c7c1091ee56e42ac4522064377a7570e7fb361d89a82f24a0b487b5f2d271f27" gracePeriod=30 Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.609484 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-httpd" containerID="cri-o://c3a592281ea48f892468880fe01ad89a1e941383763c34095886d7c7a13342ff" gracePeriod=30 Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.617801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fdbbb8fb-b5bss" event={"ID":"ffe736fb-9cf2-4686-ac9f-d9da17f1e567","Type":"ContainerStarted","Data":"5d81611b984476f33a2b70506403b0d972de66d679bddc922ae4ff038638df3e"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.618839 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.643767 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-66c9c57f69-5r7hv" podStartSLOduration=8.643748303 podStartE2EDuration="8.643748303s" podCreationTimestamp="2026-02-26 17:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:42.634050608 +0000 UTC m=+1437.195804947" watchObservedRunningTime="2026-02-26 17:38:42.643748303 +0000 UTC m=+1437.205502632" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.644149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56798b757f-6sfst" event={"ID":"c705c0f4-7fdd-44ed-8341-210b50ec8e3f","Type":"ContainerDied","Data":"0a861499e3d49ff2e2bce0ac4c9b44cdf270a7009087227fecf2796490cadf87"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.644196 4805 scope.go:117] "RemoveContainer" containerID="8e0cfdb63960e6c4c984b6703fcb097bb2787b840a6872cb074c9dfa3bc2e5f1" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.644258 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56798b757f-6sfst" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.649419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"30185570498f6cfeb4d43c8a4ec6d7baefbb367c755c9694c6a69d7be8141868"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.649678 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"90c3cc6989f0552e7bbca8892df80d3aa1bf541038d2e1f6e7d14d13e7bcfbcc"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.664471 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=35.664453855 podStartE2EDuration="35.664453855s" podCreationTimestamp="2026-02-26 17:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:42.662740732 +0000 UTC m=+1437.224495081" watchObservedRunningTime="2026-02-26 17:38:42.664453855 +0000 UTC m=+1437.226208194" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.666319 4805 generic.go:334] "Generic (PLEG): container finished" podID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerID="e84a782878a5f7da958a196b8a0122c13307ebbcbf330b97e7d4a4b85c5ed3f1" exitCode=0 Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.666535 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" event={"ID":"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6","Type":"ContainerDied","Data":"e84a782878a5f7da958a196b8a0122c13307ebbcbf330b97e7d4a4b85c5ed3f1"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.666565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" 
event={"ID":"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6","Type":"ContainerStarted","Data":"814569d5a52a6e36cae633aac50767773d43e1b81843ebaca3f5c8871513d430"} Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.702152 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-69fdbbb8fb-b5bss" podStartSLOduration=10.702115496 podStartE2EDuration="10.702115496s" podCreationTimestamp="2026-02-26 17:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:42.685699452 +0000 UTC m=+1437.247453801" watchObservedRunningTime="2026-02-26 17:38:42.702115496 +0000 UTC m=+1437.263869845" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.720249 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" podStartSLOduration=10.720220102999999 podStartE2EDuration="10.720220103s" podCreationTimestamp="2026-02-26 17:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:42.715388001 +0000 UTC m=+1437.277142340" watchObservedRunningTime="2026-02-26 17:38:42.720220103 +0000 UTC m=+1437.281974442" Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.769127 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-6sfst"] Feb 26 17:38:42 crc kubenswrapper[4805]: I0226 17:38:42.776765 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56798b757f-6sfst"] Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.003957 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c705c0f4-7fdd-44ed-8341-210b50ec8e3f" path="/var/lib/kubelet/pods/c705c0f4-7fdd-44ed-8341-210b50ec8e3f/volumes" Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.064342 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.679437 4805 generic.go:334] "Generic (PLEG): container finished" podID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerID="c3a592281ea48f892468880fe01ad89a1e941383763c34095886d7c7a13342ff" exitCode=143 Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.679491 4805 generic.go:334] "Generic (PLEG): container finished" podID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerID="c7c1091ee56e42ac4522064377a7570e7fb361d89a82f24a0b487b5f2d271f27" exitCode=143 Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.679575 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aca42ff7-721d-4578-b0b9-d7905eb67afc","Type":"ContainerDied","Data":"c3a592281ea48f892468880fe01ad89a1e941383763c34095886d7c7a13342ff"} Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.679637 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"aca42ff7-721d-4578-b0b9-d7905eb67afc","Type":"ContainerDied","Data":"c7c1091ee56e42ac4522064377a7570e7fb361d89a82f24a0b487b5f2d271f27"} Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.679662 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-log" containerID="cri-o://58fc52c99076e2838d8bfcf3d2f8af6e5596d67cd37fd3decdfc29983cd2b591" gracePeriod=30 Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.679746 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-httpd" containerID="cri-o://9718a253aa93240f56efe6ddb3b0c2cfd4cd75b9a841a2d747bc2b5fe20281c0" gracePeriod=30 Feb 26 17:38:43 crc kubenswrapper[4805]: I0226 17:38:43.708548 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=36.708532958 podStartE2EDuration="36.708532958s" podCreationTimestamp="2026-02-26 17:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:43.706850986 +0000 UTC m=+1438.268605325" watchObservedRunningTime="2026-02-26 17:38:43.708532958 +0000 UTC m=+1438.270287297" Feb 26 17:38:44 crc kubenswrapper[4805]: I0226 17:38:44.701569 4805 generic.go:334] "Generic (PLEG): container finished" podID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerID="9718a253aa93240f56efe6ddb3b0c2cfd4cd75b9a841a2d747bc2b5fe20281c0" exitCode=0 Feb 26 17:38:44 crc kubenswrapper[4805]: I0226 17:38:44.705840 4805 generic.go:334] "Generic (PLEG): container finished" podID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerID="58fc52c99076e2838d8bfcf3d2f8af6e5596d67cd37fd3decdfc29983cd2b591" exitCode=143 Feb 26 17:38:44 crc kubenswrapper[4805]: I0226 17:38:44.701623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3a821121-9d7e-4ff5-9e30-70bb10327945","Type":"ContainerDied","Data":"9718a253aa93240f56efe6ddb3b0c2cfd4cd75b9a841a2d747bc2b5fe20281c0"} Feb 26 17:38:44 crc kubenswrapper[4805]: I0226 17:38:44.705953 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3a821121-9d7e-4ff5-9e30-70bb10327945","Type":"ContainerDied","Data":"58fc52c99076e2838d8bfcf3d2f8af6e5596d67cd37fd3decdfc29983cd2b591"} Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.234304 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.246266 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-config-data\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332234 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-httpd-run\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332404 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-scripts\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332433 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-logs\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332533 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-combined-ca-bundle\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-config-data\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332626 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-256mk\" (UniqueName: \"kubernetes.io/projected/3a821121-9d7e-4ff5-9e30-70bb10327945-kube-api-access-256mk\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332663 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-logs\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332696 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-combined-ca-bundle\") pod \"3a821121-9d7e-4ff5-9e30-70bb10327945\" (UID: \"3a821121-9d7e-4ff5-9e30-70bb10327945\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-scripts\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332844 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.332958 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxnkc\" (UniqueName: \"kubernetes.io/projected/aca42ff7-721d-4578-b0b9-d7905eb67afc-kube-api-access-dxnkc\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.333075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-httpd-run\") pod \"aca42ff7-721d-4578-b0b9-d7905eb67afc\" (UID: \"aca42ff7-721d-4578-b0b9-d7905eb67afc\") " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.333224 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-logs" (OuterVolumeSpecName: "logs") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.333483 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.333508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-logs" (OuterVolumeSpecName: "logs") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.333673 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.333903 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.343980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca42ff7-721d-4578-b0b9-d7905eb67afc-kube-api-access-dxnkc" (OuterVolumeSpecName: "kube-api-access-dxnkc") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "kube-api-access-dxnkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.345195 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a821121-9d7e-4ff5-9e30-70bb10327945-kube-api-access-256mk" (OuterVolumeSpecName: "kube-api-access-256mk") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). 
InnerVolumeSpecName "kube-api-access-256mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.351700 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-scripts" (OuterVolumeSpecName: "scripts") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.361156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-scripts" (OuterVolumeSpecName: "scripts") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.389176 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74" (OuterVolumeSpecName: "glance") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). InnerVolumeSpecName "pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.430729 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64" (OuterVolumeSpecName: "glance") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438625 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-256mk\" (UniqueName: \"kubernetes.io/projected/3a821121-9d7e-4ff5-9e30-70bb10327945-kube-api-access-256mk\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438666 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438679 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438718 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") on node \"crc\" " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438736 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxnkc\" (UniqueName: \"kubernetes.io/projected/aca42ff7-721d-4578-b0b9-d7905eb67afc-kube-api-access-dxnkc\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438750 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aca42ff7-721d-4578-b0b9-d7905eb67afc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438761 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3a821121-9d7e-4ff5-9e30-70bb10327945-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 
17:38:45.438780 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") on node \"crc\" " Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.438793 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.462605 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.493379 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.518459 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.518611 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64") on node "crc" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.518690 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.518835 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74") on node "crc" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.522398 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-config-data" (OuterVolumeSpecName: "config-data") pod "aca42ff7-721d-4578-b0b9-d7905eb67afc" (UID: "aca42ff7-721d-4578-b0b9-d7905eb67afc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.525808 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-config-data" (OuterVolumeSpecName: "config-data") pod "3a821121-9d7e-4ff5-9e30-70bb10327945" (UID: "3a821121-9d7e-4ff5-9e30-70bb10327945"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.539997 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.540048 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.540058 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aca42ff7-721d-4578-b0b9-d7905eb67afc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.540068 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.540076 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a821121-9d7e-4ff5-9e30-70bb10327945-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.540085 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.729527 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"aca42ff7-721d-4578-b0b9-d7905eb67afc","Type":"ContainerDied","Data":"ca341b78d513adf25045a97655c7a4281c2ba7508a50b1cb80d6a36ca0daecc9"} Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.729583 4805 scope.go:117] "RemoveContainer" containerID="c3a592281ea48f892468880fe01ad89a1e941383763c34095886d7c7a13342ff" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.729698 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.738717 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerStarted","Data":"8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2"} Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.752999 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"4a738c7b41bc7895291d5a1a1cf0b67d368fde759beac5de426fe4b8e8b2b932"} Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.753057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"91818a8f791f5deecb2519e4850ae339b1cf75a36f06e509666fee3aa5a6129c"} Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.779192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3a821121-9d7e-4ff5-9e30-70bb10327945","Type":"ContainerDied","Data":"c21709b0882f409a63cc3bb30f8e40a63dbbfddbacfd1f390271cfecf4576f98"} Feb 26 17:38:45 crc kubenswrapper[4805]: I0226 17:38:45.779285 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.391531 4805 scope.go:117] "RemoveContainer" containerID="c7c1091ee56e42ac4522064377a7570e7fb361d89a82f24a0b487b5f2d271f27" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.392234 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.437834 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.441183 4805 scope.go:117] "RemoveContainer" containerID="9718a253aa93240f56efe6ddb3b0c2cfd4cd75b9a841a2d747bc2b5fe20281c0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.474533 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.484402 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.515259 4805 scope.go:117] "RemoveContainer" containerID="58fc52c99076e2838d8bfcf3d2f8af6e5596d67cd37fd3decdfc29983cd2b591" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.515495 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: E0226 17:38:46.515920 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-log" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.515978 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-log" Feb 26 17:38:46 crc kubenswrapper[4805]: E0226 17:38:46.516075 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" 
containerName="glance-log" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516142 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-log" Feb 26 17:38:46 crc kubenswrapper[4805]: E0226 17:38:46.516210 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-httpd" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516266 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-httpd" Feb 26 17:38:46 crc kubenswrapper[4805]: E0226 17:38:46.516351 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c705c0f4-7fdd-44ed-8341-210b50ec8e3f" containerName="init" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516415 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c705c0f4-7fdd-44ed-8341-210b50ec8e3f" containerName="init" Feb 26 17:38:46 crc kubenswrapper[4805]: E0226 17:38:46.516502 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-httpd" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516567 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-httpd" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516804 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c705c0f4-7fdd-44ed-8341-210b50ec8e3f" containerName="init" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516877 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-log" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.516932 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" containerName="glance-httpd" Feb 26 17:38:46 crc 
kubenswrapper[4805]: I0226 17:38:46.516990 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-log" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.517064 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" containerName="glance-httpd" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.518174 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.523527 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.523967 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.524044 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ff6b8" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.524151 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.554053 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.558555 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klgtj\" (UniqueName: \"kubernetes.io/projected/3ad5bd95-e53c-443e-8204-69377ae9600c-kube-api-access-klgtj\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.558624 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.558761 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.558802 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.559089 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.559128 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.559158 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.559186 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.567886 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.575763 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.577697 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.578669 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.613092 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.661599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-logs\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 
crc kubenswrapper[4805]: I0226 17:38:46.662278 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klgtj\" (UniqueName: \"kubernetes.io/projected/3ad5bd95-e53c-443e-8204-69377ae9600c-kube-api-access-klgtj\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.662397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.662513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.662823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-scripts\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.662954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 
17:38:46.663089 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663217 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663299 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663317 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663484 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663577 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663679 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-config-data\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663795 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.663916 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.664037 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8lc9\" (UniqueName: \"kubernetes.io/projected/07f64bde-8a12-4512-a78a-2ba3e0077fa3-kube-api-access-b8lc9\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " 
pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.664227 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.665065 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-logs\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.670860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.670876 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.671980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: 
I0226 17:38:46.672617 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.672655 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.672685 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0507a3f442b97fe1466b7dfe12c3b0da8e1c69cf48d5061e83bd97d1212f1f63/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.689432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klgtj\" (UniqueName: \"kubernetes.io/projected/3ad5bd95-e53c-443e-8204-69377ae9600c-kube-api-access-klgtj\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.740040 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 
17:38:46.767069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.767242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-logs\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.767309 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.767390 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-scripts\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.767506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-config-data\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.767558 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.767589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.769685 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8lc9\" (UniqueName: \"kubernetes.io/projected/07f64bde-8a12-4512-a78a-2ba3e0077fa3-kube-api-access-b8lc9\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.771284 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.771608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-logs\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.781808 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.781862 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a78d83e29b9efed98adc7cd32a238f67178734183d9dc19f1da819604a3e7a12/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.787815 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-scripts\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.824130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dgxjz" event={"ID":"8969f13b-8f7b-4e4e-a891-eac8a978bb42","Type":"ContainerStarted","Data":"fae150fe9c39f36ada3f8390a014777fbd381fda97923ec8607a1697117187e0"} Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.853099 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.861340 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dgxjz" podStartSLOduration=4.077361718 podStartE2EDuration="50.861310269s" podCreationTimestamp="2026-02-26 17:37:56 +0000 UTC" firstStartedPulling="2026-02-26 17:37:58.433334191 +0000 UTC m=+1392.995088530" lastFinishedPulling="2026-02-26 17:38:45.217282742 +0000 UTC m=+1439.779037081" observedRunningTime="2026-02-26 17:38:46.852064926 +0000 UTC m=+1441.413819265" watchObservedRunningTime="2026-02-26 17:38:46.861310269 +0000 UTC m=+1441.423064618" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.866714 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.868316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.871367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-config-data\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.873116 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.873105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a466ee40-e6ef-4a36-96c6-88e7ce00a28c","Type":"ContainerStarted","Data":"757d1d16ad84f384e13cd7c882ec3a82903618998e15fbd6eac8fcb206b40775"} Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.875844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8lc9\" (UniqueName: \"kubernetes.io/projected/07f64bde-8a12-4512-a78a-2ba3e0077fa3-kube-api-access-b8lc9\") pod \"glance-default-external-api-0\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.927224 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.996328 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a821121-9d7e-4ff5-9e30-70bb10327945" path="/var/lib/kubelet/pods/3a821121-9d7e-4ff5-9e30-70bb10327945/volumes" Feb 26 17:38:46 crc kubenswrapper[4805]: I0226 17:38:46.997890 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aca42ff7-721d-4578-b0b9-d7905eb67afc" path="/var/lib/kubelet/pods/aca42ff7-721d-4578-b0b9-d7905eb67afc/volumes" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.003040 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=83.299275624 podStartE2EDuration="2m1.003007256s" podCreationTimestamp="2026-02-26 17:36:46 +0000 UTC" firstStartedPulling="2026-02-26 17:37:52.747112657 +0000 UTC m=+1387.308866996" lastFinishedPulling="2026-02-26 17:38:30.450844299 +0000 UTC m=+1425.012598628" observedRunningTime="2026-02-26 17:38:46.936182459 +0000 UTC m=+1441.497936818" watchObservedRunningTime="2026-02-26 17:38:47.003007256 +0000 UTC m=+1441.564761595" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.382713 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-rsnj7"] Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.383358 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="dnsmasq-dns" containerID="cri-o://814569d5a52a6e36cae633aac50767773d43e1b81843ebaca3f5c8871513d430" gracePeriod=10 Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.391389 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.475497 4805 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-6b7b667979-z2m6q"] Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.477429 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.484299 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.539082 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-z2m6q"] Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.623073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.623146 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7vgf\" (UniqueName: \"kubernetes.io/projected/4349a64d-0728-48b1-aada-6e7977e291af-kube-api-access-s7vgf\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.623199 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.623229 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-svc\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.623252 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.623314 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-config\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.726861 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-config\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.726973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.727075 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7vgf\" (UniqueName: 
\"kubernetes.io/projected/4349a64d-0728-48b1-aada-6e7977e291af-kube-api-access-s7vgf\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.727179 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.727234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-svc\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.727273 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.728581 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.729367 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-config\") pod 
\"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.730087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.736878 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.739766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-svc\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.761078 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7vgf\" (UniqueName: \"kubernetes.io/projected/4349a64d-0728-48b1-aada-6e7977e291af-kube-api-access-s7vgf\") pod \"dnsmasq-dns-6b7b667979-z2m6q\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") " pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.846836 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.902041 4805 generic.go:334] "Generic (PLEG): container finished" podID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerID="814569d5a52a6e36cae633aac50767773d43e1b81843ebaca3f5c8871513d430" exitCode=0 Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.902110 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" event={"ID":"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6","Type":"ContainerDied","Data":"814569d5a52a6e36cae633aac50767773d43e1b81843ebaca3f5c8871513d430"} Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.905256 4805 generic.go:334] "Generic (PLEG): container finished" podID="4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" containerID="f18cd5e30cd9c44aaa5bf52ec9a00c41d4f42071caad7ddb87a3efcc5e7c2330" exitCode=0 Feb 26 17:38:47 crc kubenswrapper[4805]: I0226 17:38:47.906788 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlsft" event={"ID":"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4","Type":"ContainerDied","Data":"f18cd5e30cd9c44aaa5bf52ec9a00c41d4f42071caad7ddb87a3efcc5e7c2330"} Feb 26 17:38:48 crc kubenswrapper[4805]: I0226 17:38:48.030763 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:38:48 crc kubenswrapper[4805]: I0226 17:38:48.072180 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Feb 26 17:38:48 crc kubenswrapper[4805]: W0226 17:38:48.135382 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ad5bd95_e53c_443e_8204_69377ae9600c.slice/crio-31e6ff7463df7efa84faf4562817385d34f3ded147a83729f64830f1927f48ca 
WatchSource:0}: Error finding container 31e6ff7463df7efa84faf4562817385d34f3ded147a83729f64830f1927f48ca: Status 404 returned error can't find the container with id 31e6ff7463df7efa84faf4562817385d34f3ded147a83729f64830f1927f48ca Feb 26 17:38:48 crc kubenswrapper[4805]: I0226 17:38:48.136387 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:38:48 crc kubenswrapper[4805]: I0226 17:38:48.519394 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-z2m6q"] Feb 26 17:38:48 crc kubenswrapper[4805]: W0226 17:38:48.532895 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4349a64d_0728_48b1_aada_6e7977e291af.slice/crio-226ab347956e95d3142da2c157008fb441cb881e3be4ab0a281a15638cca6d6d WatchSource:0}: Error finding container 226ab347956e95d3142da2c157008fb441cb881e3be4ab0a281a15638cca6d6d: Status 404 returned error can't find the container with id 226ab347956e95d3142da2c157008fb441cb881e3be4ab0a281a15638cca6d6d Feb 26 17:38:51 crc kubenswrapper[4805]: E0226 17:38:51.246967 4805 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.394s" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.248641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07f64bde-8a12-4512-a78a-2ba3e0077fa3","Type":"ContainerStarted","Data":"d80b0f3ba8bed38ae8c2df0a58206a5a3d321b1eff6bbb91f84b043a2089be5e"} Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.311118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" event={"ID":"4349a64d-0728-48b1-aada-6e7977e291af","Type":"ContainerStarted","Data":"226ab347956e95d3142da2c157008fb441cb881e3be4ab0a281a15638cca6d6d"} Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.311170 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ad5bd95-e53c-443e-8204-69377ae9600c","Type":"ContainerStarted","Data":"31e6ff7463df7efa84faf4562817385d34f3ded147a83729f64830f1927f48ca"} Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.491716 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" event={"ID":"4349a64d-0728-48b1-aada-6e7977e291af","Type":"ContainerStarted","Data":"433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21"} Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.499402 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ad5bd95-e53c-443e-8204-69377ae9600c","Type":"ContainerStarted","Data":"7cb331726954a0f3646c4ed8dbb743dca78dddad2acba37a2ca3d42f2edc5afa"} Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.507238 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07f64bde-8a12-4512-a78a-2ba3e0077fa3","Type":"ContainerStarted","Data":"85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc"} Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.700682 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.799827 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-fernet-keys\") pod \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.799888 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-kube-api-access-xq9dl\") pod \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.799947 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-credential-keys\") pod \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.800055 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-combined-ca-bundle\") pod \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.800129 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-scripts\") pod \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.800207 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-config-data\") pod \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\" (UID: \"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4\") " Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.804191 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" (UID: "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.806836 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" (UID: "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.808996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-scripts" (OuterVolumeSpecName: "scripts") pod "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" (UID: "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.809914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-kube-api-access-xq9dl" (OuterVolumeSpecName: "kube-api-access-xq9dl") pod "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" (UID: "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4"). InnerVolumeSpecName "kube-api-access-xq9dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.840724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" (UID: "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.865896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-config-data" (OuterVolumeSpecName: "config-data") pod "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" (UID: "4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.902122 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.902158 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq9dl\" (UniqueName: \"kubernetes.io/projected/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-kube-api-access-xq9dl\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.902168 4805 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.902177 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.902186 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:51 crc kubenswrapper[4805]: I0226 17:38:51.902194 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.203373 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.310473 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-dns-svc\") pod \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.310704 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntztd\" (UniqueName: \"kubernetes.io/projected/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-kube-api-access-ntztd\") pod \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.310806 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-config\") pod \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.310838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-sb\") pod \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.310999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-nb\") pod \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\" (UID: \"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6\") " Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.319884 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-kube-api-access-ntztd" (OuterVolumeSpecName: "kube-api-access-ntztd") pod "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" (UID: "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6"). InnerVolumeSpecName "kube-api-access-ntztd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.367718 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" (UID: "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.383196 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" (UID: "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.385226 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" (UID: "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.403160 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-config" (OuterVolumeSpecName: "config") pod "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" (UID: "70970a4c-b0b2-4648-b4c8-2ad75dfd14a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.413671 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntztd\" (UniqueName: \"kubernetes.io/projected/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-kube-api-access-ntztd\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.413711 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.413726 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.413737 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 
17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.413749 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.522277 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.522289 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b6c948c7-rsnj7" event={"ID":"70970a4c-b0b2-4648-b4c8-2ad75dfd14a6","Type":"ContainerDied","Data":"48b9cc38af575ee160c400a0f06eb5620c3a6ecd00be0cba5348b1d4f90d5f90"} Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.522342 4805 scope.go:117] "RemoveContainer" containerID="814569d5a52a6e36cae633aac50767773d43e1b81843ebaca3f5c8871513d430" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.527671 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlsft" event={"ID":"4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4","Type":"ContainerDied","Data":"d9b1e512eaf81d4e1c533a09caf87d9efb76051102fcd8191012b926b5f69901"} Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.527709 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9b1e512eaf81d4e1c533a09caf87d9efb76051102fcd8191012b926b5f69901" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.527758 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jlsft" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.534898 4805 generic.go:334] "Generic (PLEG): container finished" podID="4349a64d-0728-48b1-aada-6e7977e291af" containerID="433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21" exitCode=0 Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.534966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" event={"ID":"4349a64d-0728-48b1-aada-6e7977e291af","Type":"ContainerDied","Data":"433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21"} Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.539219 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-xgqc6" event={"ID":"96d28605-2282-4e5f-93d6-5a3023c7bc9c","Type":"ContainerStarted","Data":"5d3e2c52590370cb3bb0c91f5dc57c28c1677079f74b3e277f915a0266c9046b"} Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.543700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ad5bd95-e53c-443e-8204-69377ae9600c","Type":"ContainerStarted","Data":"db82634890c0444fb4366b978e1d6658530b01df1c672681376cf76fed06ad90"} Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.579632 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-rsnj7"] Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.589892 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b6c948c7-rsnj7"] Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.614261 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-xgqc6" podStartSLOduration=3.385194157 podStartE2EDuration="56.614237359s" podCreationTimestamp="2026-02-26 17:37:56 +0000 UTC" firstStartedPulling="2026-02-26 17:37:58.671228836 +0000 UTC m=+1393.232983175" lastFinishedPulling="2026-02-26 
17:38:51.900272038 +0000 UTC m=+1446.462026377" observedRunningTime="2026-02-26 17:38:52.610106705 +0000 UTC m=+1447.171861064" watchObservedRunningTime="2026-02-26 17:38:52.614237359 +0000 UTC m=+1447.175991698" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.646498 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.646476683 podStartE2EDuration="6.646476683s" podCreationTimestamp="2026-02-26 17:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:52.635731752 +0000 UTC m=+1447.197486101" watchObservedRunningTime="2026-02-26 17:38:52.646476683 +0000 UTC m=+1447.208231022" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.826091 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-847cbf4c89-k2szx"] Feb 26 17:38:52 crc kubenswrapper[4805]: E0226 17:38:52.826664 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="dnsmasq-dns" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.826687 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="dnsmasq-dns" Feb 26 17:38:52 crc kubenswrapper[4805]: E0226 17:38:52.826702 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="init" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.826710 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="init" Feb 26 17:38:52 crc kubenswrapper[4805]: E0226 17:38:52.826848 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" containerName="keystone-bootstrap" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.826863 4805 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" containerName="keystone-bootstrap" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.827128 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" containerName="keystone-bootstrap" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.827162 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" containerName="dnsmasq-dns" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.828088 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.831754 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.833985 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.834444 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.834639 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bvnnf" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.834807 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.834954 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.872105 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-847cbf4c89-k2szx"] Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.922465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-internal-tls-certs\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.922536 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-public-tls-certs\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.922715 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-scripts\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.922785 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-combined-ca-bundle\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.922871 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxxsf\" (UniqueName: \"kubernetes.io/projected/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-kube-api-access-xxxsf\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.923007 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-credential-keys\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.923291 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-fernet-keys\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.923336 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-config-data\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:52 crc kubenswrapper[4805]: I0226 17:38:52.974063 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70970a4c-b0b2-4648-b4c8-2ad75dfd14a6" path="/var/lib/kubelet/pods/70970a4c-b0b2-4648-b4c8-2ad75dfd14a6/volumes" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.024969 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-scripts\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-combined-ca-bundle\") pod 
\"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025234 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxxsf\" (UniqueName: \"kubernetes.io/projected/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-kube-api-access-xxxsf\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-credential-keys\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025429 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-fernet-keys\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-config-data\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-public-tls-certs\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " 
pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.025535 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-internal-tls-certs\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.031206 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-config-data\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.034349 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-scripts\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.035219 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-public-tls-certs\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.035956 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-internal-tls-certs\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.036933 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-credential-keys\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.037438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-combined-ca-bundle\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.056414 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-fernet-keys\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.058130 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxxsf\" (UniqueName: \"kubernetes.io/projected/2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba-kube-api-access-xxxsf\") pod \"keystone-847cbf4c89-k2szx\" (UID: \"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba\") " pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.159357 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.558975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07f64bde-8a12-4512-a78a-2ba3e0077fa3","Type":"ContainerStarted","Data":"7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b"} Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.561389 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c0f3346-b7f3-48c3-b177-86d8adb7d190" containerID="5e8b2dc525618a74dce4e1d421f9b858f0f55249f5cd42c645e6d4d81f6dd1ea" exitCode=0 Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.561517 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-79wtt" event={"ID":"5c0f3346-b7f3-48c3-b177-86d8adb7d190","Type":"ContainerDied","Data":"5e8b2dc525618a74dce4e1d421f9b858f0f55249f5cd42c645e6d4d81f6dd1ea"} Feb 26 17:38:53 crc kubenswrapper[4805]: I0226 17:38:53.600393 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.600355621 podStartE2EDuration="7.600355621s" podCreationTimestamp="2026-02-26 17:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:53.581384602 +0000 UTC m=+1448.143138941" watchObservedRunningTime="2026-02-26 17:38:53.600355621 +0000 UTC m=+1448.162109960" Feb 26 17:38:55 crc kubenswrapper[4805]: I0226 17:38:55.595464 4805 generic.go:334] "Generic (PLEG): container finished" podID="1627cc17-6ee2-4176-b719-aa04e00aa881" containerID="56941ff444a4e8d6538e0392b00c752ffa2d15a3bb5f28f4f4b3d1fb49310946" exitCode=0 Feb 26 17:38:55 crc kubenswrapper[4805]: I0226 17:38:55.595655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wg9r2" 
event={"ID":"1627cc17-6ee2-4176-b719-aa04e00aa881","Type":"ContainerDied","Data":"56941ff444a4e8d6538e0392b00c752ffa2d15a3bb5f28f4f4b3d1fb49310946"} Feb 26 17:38:56 crc kubenswrapper[4805]: I0226 17:38:56.856671 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:56 crc kubenswrapper[4805]: I0226 17:38:56.857038 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:56 crc kubenswrapper[4805]: I0226 17:38:56.925703 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:56 crc kubenswrapper[4805]: I0226 17:38:56.934399 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 17:38:56 crc kubenswrapper[4805]: I0226 17:38:56.934448 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.092649 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.098636 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.119302 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.436565 4805 scope.go:117] "RemoveContainer" containerID="e84a782878a5f7da958a196b8a0122c13307ebbcbf330b97e7d4a4b85c5ed3f1" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.635610 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wg9r2" 
event={"ID":"1627cc17-6ee2-4176-b719-aa04e00aa881","Type":"ContainerDied","Data":"1057c4aab3ef7e12bde5a49f07fe16f246c02dc63644f1ea10456957cd9f2312"} Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.635866 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1057c4aab3ef7e12bde5a49f07fe16f246c02dc63644f1ea10456957cd9f2312" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.646977 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-79wtt" event={"ID":"5c0f3346-b7f3-48c3-b177-86d8adb7d190","Type":"ContainerDied","Data":"64f4d4735a14313fd70a0c8594bee07292a5a083dcf9c883c01b528909a5271c"} Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.647047 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f4d4735a14313fd70a0c8594bee07292a5a083dcf9c883c01b528909a5271c" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.647570 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.647761 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.647875 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.648102 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.727318 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-79wtt" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.826987 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.905332 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-scripts\") pod \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.905551 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-combined-ca-bundle\") pod \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.905617 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9m9l\" (UniqueName: \"kubernetes.io/projected/5c0f3346-b7f3-48c3-b177-86d8adb7d190-kube-api-access-r9m9l\") pod \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.905792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f3346-b7f3-48c3-b177-86d8adb7d190-logs\") pod \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.905955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-config-data\") pod \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\" (UID: \"5c0f3346-b7f3-48c3-b177-86d8adb7d190\") " Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.908662 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5c0f3346-b7f3-48c3-b177-86d8adb7d190-logs" (OuterVolumeSpecName: "logs") pod "5c0f3346-b7f3-48c3-b177-86d8adb7d190" (UID: "5c0f3346-b7f3-48c3-b177-86d8adb7d190"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.916523 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-scripts" (OuterVolumeSpecName: "scripts") pod "5c0f3346-b7f3-48c3-b177-86d8adb7d190" (UID: "5c0f3346-b7f3-48c3-b177-86d8adb7d190"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.916973 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0f3346-b7f3-48c3-b177-86d8adb7d190-kube-api-access-r9m9l" (OuterVolumeSpecName: "kube-api-access-r9m9l") pod "5c0f3346-b7f3-48c3-b177-86d8adb7d190" (UID: "5c0f3346-b7f3-48c3-b177-86d8adb7d190"). InnerVolumeSpecName "kube-api-access-r9m9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.942659 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c0f3346-b7f3-48c3-b177-86d8adb7d190" (UID: "5c0f3346-b7f3-48c3-b177-86d8adb7d190"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:57 crc kubenswrapper[4805]: I0226 17:38:57.987666 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-config-data" (OuterVolumeSpecName: "config-data") pod "5c0f3346-b7f3-48c3-b177-86d8adb7d190" (UID: "5c0f3346-b7f3-48c3-b177-86d8adb7d190"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008347 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-combined-ca-bundle\") pod \"1627cc17-6ee2-4176-b719-aa04e00aa881\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008461 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp4cx\" (UniqueName: \"kubernetes.io/projected/1627cc17-6ee2-4176-b719-aa04e00aa881-kube-api-access-wp4cx\") pod \"1627cc17-6ee2-4176-b719-aa04e00aa881\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008514 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-db-sync-config-data\") pod \"1627cc17-6ee2-4176-b719-aa04e00aa881\" (UID: \"1627cc17-6ee2-4176-b719-aa04e00aa881\") " Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008922 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008935 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008944 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0f3346-b7f3-48c3-b177-86d8adb7d190-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008953 4805 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9m9l\" (UniqueName: \"kubernetes.io/projected/5c0f3346-b7f3-48c3-b177-86d8adb7d190-kube-api-access-r9m9l\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.008961 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0f3346-b7f3-48c3-b177-86d8adb7d190-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.014742 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1627cc17-6ee2-4176-b719-aa04e00aa881-kube-api-access-wp4cx" (OuterVolumeSpecName: "kube-api-access-wp4cx") pod "1627cc17-6ee2-4176-b719-aa04e00aa881" (UID: "1627cc17-6ee2-4176-b719-aa04e00aa881"). InnerVolumeSpecName "kube-api-access-wp4cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.024376 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1627cc17-6ee2-4176-b719-aa04e00aa881" (UID: "1627cc17-6ee2-4176-b719-aa04e00aa881"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:58 crc kubenswrapper[4805]: W0226 17:38:58.072735 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e15f4c2_3c78_4bb0_b1e0_ae8876e28cba.slice/crio-4cfd29efb6aae01905a8acc68886a94625d51fdc895d79773567ace0e7584dfc WatchSource:0}: Error finding container 4cfd29efb6aae01905a8acc68886a94625d51fdc895d79773567ace0e7584dfc: Status 404 returned error can't find the container with id 4cfd29efb6aae01905a8acc68886a94625d51fdc895d79773567ace0e7584dfc Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.075244 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1627cc17-6ee2-4176-b719-aa04e00aa881" (UID: "1627cc17-6ee2-4176-b719-aa04e00aa881"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.076545 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-847cbf4c89-k2szx"] Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.110712 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.110746 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp4cx\" (UniqueName: \"kubernetes.io/projected/1627cc17-6ee2-4176-b719-aa04e00aa881-kube-api-access-wp4cx\") on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.110758 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1627cc17-6ee2-4176-b719-aa04e00aa881-db-sync-config-data\") 
on node \"crc\" DevicePath \"\"" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.658040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerStarted","Data":"2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454"} Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.661046 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-847cbf4c89-k2szx" event={"ID":"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba","Type":"ContainerStarted","Data":"12a2dfbc6b0782d6266bc0ff906354e0aa385215649fc02da2fdcdf5ee43f488"} Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.661156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-847cbf4c89-k2szx" event={"ID":"2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba","Type":"ContainerStarted","Data":"4cfd29efb6aae01905a8acc68886a94625d51fdc895d79773567ace0e7584dfc"} Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.662440 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.664167 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ea12120-9c03-4819-a2ae-61bf83333dea" containerID="3a6f647bb63e20e9c1916667cba8f847a673b3c9f5180dbc11f6fe19a8817066" exitCode=0 Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.664269 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" event={"ID":"3ea12120-9c03-4819-a2ae-61bf83333dea","Type":"ContainerDied","Data":"3a6f647bb63e20e9c1916667cba8f847a673b3c9f5180dbc11f6fe19a8817066"} Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.667222 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" 
event={"ID":"4349a64d-0728-48b1-aada-6e7977e291af","Type":"ContainerStarted","Data":"3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e"} Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.667341 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.667445 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wg9r2" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.670055 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-79wtt" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.682740 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-847cbf4c89-k2szx" podStartSLOduration=6.682717395 podStartE2EDuration="6.682717395s" podCreationTimestamp="2026-02-26 17:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:58.682055728 +0000 UTC m=+1453.243810087" watchObservedRunningTime="2026-02-26 17:38:58.682717395 +0000 UTC m=+1453.244471724" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.713237 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" podStartSLOduration=11.713214315 podStartE2EDuration="11.713214315s" podCreationTimestamp="2026-02-26 17:38:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:38:58.707527541 +0000 UTC m=+1453.269281880" watchObservedRunningTime="2026-02-26 17:38:58.713214315 +0000 UTC m=+1453.274968654" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.858470 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59c7c6ffc6-vlxnj"] Feb 
26 17:38:58 crc kubenswrapper[4805]: E0226 17:38:58.859318 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c0f3346-b7f3-48c3-b177-86d8adb7d190" containerName="placement-db-sync" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.859344 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c0f3346-b7f3-48c3-b177-86d8adb7d190" containerName="placement-db-sync" Feb 26 17:38:58 crc kubenswrapper[4805]: E0226 17:38:58.859365 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1627cc17-6ee2-4176-b719-aa04e00aa881" containerName="barbican-db-sync" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.859392 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1627cc17-6ee2-4176-b719-aa04e00aa881" containerName="barbican-db-sync" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.859666 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0f3346-b7f3-48c3-b177-86d8adb7d190" containerName="placement-db-sync" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.859698 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1627cc17-6ee2-4176-b719-aa04e00aa881" containerName="barbican-db-sync" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.861003 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.874744 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.875007 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.875163 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fdwmg" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.875281 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.875428 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.899830 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59c7c6ffc6-vlxnj"] Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.926795 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-scripts\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.926971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5xh\" (UniqueName: \"kubernetes.io/projected/91531636-caf5-4454-8cff-96134d359116-kube-api-access-cj5xh\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.927108 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-internal-tls-certs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.927160 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91531636-caf5-4454-8cff-96134d359116-logs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.927267 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-combined-ca-bundle\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.927355 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-config-data\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:58 crc kubenswrapper[4805]: I0226 17:38:58.927521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-public-tls-certs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029438 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5xh\" (UniqueName: \"kubernetes.io/projected/91531636-caf5-4454-8cff-96134d359116-kube-api-access-cj5xh\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-internal-tls-certs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029550 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91531636-caf5-4454-8cff-96134d359116-logs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029627 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-combined-ca-bundle\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029688 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-config-data\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029780 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-public-tls-certs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.029862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-scripts\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.030277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91531636-caf5-4454-8cff-96134d359116-logs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.036886 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-scripts\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.037296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-combined-ca-bundle\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.037739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-public-tls-certs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: 
\"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.040136 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-internal-tls-certs\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.048729 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91531636-caf5-4454-8cff-96134d359116-config-data\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.049625 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5xh\" (UniqueName: \"kubernetes.io/projected/91531636-caf5-4454-8cff-96134d359116-kube-api-access-cj5xh\") pod \"placement-59c7c6ffc6-vlxnj\" (UID: \"91531636-caf5-4454-8cff-96134d359116\") " pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.195850 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-664585c8f5-7x9kv"] Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.199670 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.218771 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nxm6r" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.218837 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.218997 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.225193 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.251581 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664585c8f5-7x9kv"] Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.265885 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-74b677ff4d-kgqc6"] Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.280390 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.295154 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.337189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcsmd\" (UniqueName: \"kubernetes.io/projected/1232829e-cb44-4a21-b33b-c58be5fbd656-kube-api-access-gcsmd\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.337597 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1232829e-cb44-4a21-b33b-c58be5fbd656-logs\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: 
I0226 17:38:59.337813 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-config-data\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.337889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-combined-ca-bundle\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.337936 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-config-data-custom\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.365899 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74b677ff4d-kgqc6"] Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.427469 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-z2m6q"] Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440464 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-config-data\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv" Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 
17:38:59.440608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-combined-ca-bundle\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440672 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-combined-ca-bundle\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440739 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-config-data-custom\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-config-data\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440848 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-logs\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440910 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57m65\" (UniqueName: \"kubernetes.io/projected/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-kube-api-access-57m65\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440948 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcsmd\" (UniqueName: \"kubernetes.io/projected/1232829e-cb44-4a21-b33b-c58be5fbd656-kube-api-access-gcsmd\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.440996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-config-data-custom\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.441044 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1232829e-cb44-4a21-b33b-c58be5fbd656-logs\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.456071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1232829e-cb44-4a21-b33b-c58be5fbd656-logs\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.470827 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bc5p"]
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.472607 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.487821 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bc5p"]
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.504278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-config-data\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.520860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-combined-ca-bundle\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.545973 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcsmd\" (UniqueName: \"kubernetes.io/projected/1232829e-cb44-4a21-b33b-c58be5fbd656-kube-api-access-gcsmd\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.546494 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1232829e-cb44-4a21-b33b-c58be5fbd656-config-data-custom\") pod \"barbican-worker-664585c8f5-7x9kv\" (UID: \"1232829e-cb44-4a21-b33b-c58be5fbd656\") " pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548073 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-config-data-custom\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548170 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmjx5\" (UniqueName: \"kubernetes.io/projected/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-kube-api-access-jmjx5\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548279 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-config\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548342 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-combined-ca-bundle\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548422 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548474 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-config-data\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548507 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-logs\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.548534 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57m65\" (UniqueName: \"kubernetes.io/projected/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-kube-api-access-57m65\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.550642 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-logs\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.552711 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-664585c8f5-7x9kv"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.559172 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-combined-ca-bundle\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.559331 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-config-data\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.574276 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-config-data-custom\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.576673 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57m65\" (UniqueName: \"kubernetes.io/projected/b3cf852a-8ab9-4f96-b639-cdf3d7cf8407-kube-api-access-57m65\") pod \"barbican-keystone-listener-74b677ff4d-kgqc6\" (UID: \"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407\") " pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.586085 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cf7d6fbd8-c2wwp"]
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.587754 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.593926 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.598714 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cf7d6fbd8-c2wwp"]
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmjx5\" (UniqueName: \"kubernetes.io/projected/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-kube-api-access-jmjx5\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651605 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-config\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651635 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmg22\" (UniqueName: \"kubernetes.io/projected/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-kube-api-access-bmg22\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data-custom\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651742 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651768 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-logs\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651888 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651914 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-combined-ca-bundle\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.651955 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.653677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-config\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.653921 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.654198 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.654216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.654467 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.675556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmjx5\" (UniqueName: \"kubernetes.io/projected/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-kube-api-access-jmjx5\") pod \"dnsmasq-dns-848cf88cfc-6bc5p\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.679517 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.679779 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.754495 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-logs\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.754873 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-logs\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.756555 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.756590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-combined-ca-bundle\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.756932 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmg22\" (UniqueName: \"kubernetes.io/projected/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-kube-api-access-bmg22\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.757004 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data-custom\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.767525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.777438 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.778546 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data-custom\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.779722 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-combined-ca-bundle\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.784648 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmg22\" (UniqueName: \"kubernetes.io/projected/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-kube-api-access-bmg22\") pod \"barbican-api-7cf7d6fbd8-c2wwp\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.949261 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p"
Feb 26 17:38:59 crc kubenswrapper[4805]: I0226 17:38:59.966732 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp"
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.150300 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535458-b6pvp"
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.268261 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw5pm\" (UniqueName: \"kubernetes.io/projected/3ea12120-9c03-4819-a2ae-61bf83333dea-kube-api-access-nw5pm\") pod \"3ea12120-9c03-4819-a2ae-61bf83333dea\" (UID: \"3ea12120-9c03-4819-a2ae-61bf83333dea\") "
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.275231 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea12120-9c03-4819-a2ae-61bf83333dea-kube-api-access-nw5pm" (OuterVolumeSpecName: "kube-api-access-nw5pm") pod "3ea12120-9c03-4819-a2ae-61bf83333dea" (UID: "3ea12120-9c03-4819-a2ae-61bf83333dea"). InnerVolumeSpecName "kube-api-access-nw5pm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.371347 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw5pm\" (UniqueName: \"kubernetes.io/projected/3ea12120-9c03-4819-a2ae-61bf83333dea-kube-api-access-nw5pm\") on node \"crc\" DevicePath \"\""
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.639951 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59c7c6ffc6-vlxnj"]
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.709497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59c7c6ffc6-vlxnj" event={"ID":"91531636-caf5-4454-8cff-96134d359116","Type":"ContainerStarted","Data":"32b25e0849ed6a85bdfcecf8c727645239887a646bb5813d8c78efa622ca716a"}
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.713403 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" podUID="4349a64d-0728-48b1-aada-6e7977e291af" containerName="dnsmasq-dns" containerID="cri-o://3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e" gracePeriod=10
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.713707 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535458-b6pvp"
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.713772 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535458-b6pvp" event={"ID":"3ea12120-9c03-4819-a2ae-61bf83333dea","Type":"ContainerDied","Data":"bf0065d9d8810d5b97eada495acfd6b3ccf7bbe3dc976c9480b5b1fbffe5df49"}
Feb 26 17:39:00 crc kubenswrapper[4805]: I0226 17:39:00.713850 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0065d9d8810d5b97eada495acfd6b3ccf7bbe3dc976c9480b5b1fbffe5df49"
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.132090 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bc5p"]
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.163799 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cf7d6fbd8-c2wwp"]
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.202427 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74b677ff4d-kgqc6"]
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.212879 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-664585c8f5-7x9kv"]
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.260133 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535452-cmdkc"]
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.276103 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535452-cmdkc"]
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.460723 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q"
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.517374 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-swift-storage-0\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.517580 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-config\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.517684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.517723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-svc\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.517756 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-nb\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.517802 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7vgf\" (UniqueName: \"kubernetes.io/projected/4349a64d-0728-48b1-aada-6e7977e291af-kube-api-access-s7vgf\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.550145 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4349a64d-0728-48b1-aada-6e7977e291af-kube-api-access-s7vgf" (OuterVolumeSpecName: "kube-api-access-s7vgf") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "kube-api-access-s7vgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.621113 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7vgf\" (UniqueName: \"kubernetes.io/projected/4349a64d-0728-48b1-aada-6e7977e291af-kube-api-access-s7vgf\") on node \"crc\" DevicePath \"\""
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.733379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" event={"ID":"1fef9a78-09bd-4e03-b7be-0f92d3aa705b","Type":"ContainerStarted","Data":"2be13c1d0b225c7d4e5be6029fab2803e8901769e8c99716c35ba7df19c01e53"}
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.742822 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6" event={"ID":"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407","Type":"ContainerStarted","Data":"09469149f2035aa095fff034105ddc80dc7ca44ac9b6f80e16d933d9a4216ab1"}
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.764734 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59c7c6ffc6-vlxnj" event={"ID":"91531636-caf5-4454-8cff-96134d359116","Type":"ContainerStarted","Data":"00dd1fdc289b3c4e6d3ea2245b53385023340003b4c912bd06a5fbaef8275e9e"}
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.773698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664585c8f5-7x9kv" event={"ID":"1232829e-cb44-4a21-b33b-c58be5fbd656","Type":"ContainerStarted","Data":"281153e2f3f245181ab5137043bf808aef8e677fc6cacbf3694d5830648659c4"}
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.811884 4805 generic.go:334] "Generic (PLEG): container finished" podID="4349a64d-0728-48b1-aada-6e7977e291af" containerID="3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e" exitCode=0
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.812030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" event={"ID":"4349a64d-0728-48b1-aada-6e7977e291af","Type":"ContainerDied","Data":"3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e"}
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.812045 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q"
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.812067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-z2m6q" event={"ID":"4349a64d-0728-48b1-aada-6e7977e291af","Type":"ContainerDied","Data":"226ab347956e95d3142da2c157008fb441cb881e3be4ab0a281a15638cca6d6d"}
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.812089 4805 scope.go:117] "RemoveContainer" containerID="3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e"
Feb 26 17:39:01 crc kubenswrapper[4805]: I0226 17:39:01.818205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" event={"ID":"45bd8308-31de-4e9a-b0cc-d60f191e9f3a","Type":"ContainerStarted","Data":"ff5190eb6cb75d27aaf463e077ffaf5180d15f971207b7f65aed7e21b3e03598"}
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.028710 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69668dd468-5vzck"]
Feb 26 17:39:02 crc kubenswrapper[4805]: E0226 17:39:02.029230 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4349a64d-0728-48b1-aada-6e7977e291af" containerName="dnsmasq-dns"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.029255 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4349a64d-0728-48b1-aada-6e7977e291af" containerName="dnsmasq-dns"
Feb 26 17:39:02 crc kubenswrapper[4805]: E0226 17:39:02.029269 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4349a64d-0728-48b1-aada-6e7977e291af" containerName="init"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.029277 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4349a64d-0728-48b1-aada-6e7977e291af" containerName="init"
Feb 26 17:39:02 crc kubenswrapper[4805]: E0226 17:39:02.029287 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea12120-9c03-4819-a2ae-61bf83333dea" containerName="oc"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.029293 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea12120-9c03-4819-a2ae-61bf83333dea" containerName="oc"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.029474 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4349a64d-0728-48b1-aada-6e7977e291af" containerName="dnsmasq-dns"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.029492 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea12120-9c03-4819-a2ae-61bf83333dea" containerName="oc"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.030816 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69668dd468-5vzck"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.036376 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.036625 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.059047 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.059607 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.067468 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69668dd468-5vzck"]
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.161862 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.167799 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.174602 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb\") pod \"4349a64d-0728-48b1-aada-6e7977e291af\" (UID: \"4349a64d-0728-48b1-aada-6e7977e291af\") "
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175211 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-public-tls-certs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-internal-tls-certs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck"
Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-config-data-custom\") pod \"barbican-api-69668dd468-5vzck\"
(UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175346 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-combined-ca-bundle\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175384 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nds2s\" (UniqueName: \"kubernetes.io/projected/70e967a7-f90a-42a8-9d73-087c05c2ad6f-kube-api-access-nds2s\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-config-data\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e967a7-f90a-42a8-9d73-087c05c2ad6f-logs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175651 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:02 crc 
kubenswrapper[4805]: I0226 17:39:02.175666 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175678 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:02 crc kubenswrapper[4805]: W0226 17:39:02.175779 4805 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4349a64d-0728-48b1-aada-6e7977e291af/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.175792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.176639 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-config" (OuterVolumeSpecName: "config") pod "4349a64d-0728-48b1-aada-6e7977e291af" (UID: "4349a64d-0728-48b1-aada-6e7977e291af"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e967a7-f90a-42a8-9d73-087c05c2ad6f-logs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279428 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-public-tls-certs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279479 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-internal-tls-certs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-config-data-custom\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-combined-ca-bundle\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " 
pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nds2s\" (UniqueName: \"kubernetes.io/projected/70e967a7-f90a-42a8-9d73-087c05c2ad6f-kube-api-access-nds2s\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279619 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-config-data\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279691 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.279702 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4349a64d-0728-48b1-aada-6e7977e291af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.282868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e967a7-f90a-42a8-9d73-087c05c2ad6f-logs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.283816 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-config-data\") pod \"barbican-api-69668dd468-5vzck\" (UID: 
\"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.285864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-config-data-custom\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.286559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-public-tls-certs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.286892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-internal-tls-certs\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.290772 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e967a7-f90a-42a8-9d73-087c05c2ad6f-combined-ca-bundle\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.311317 4805 scope.go:117] "RemoveContainer" containerID="433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.318862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nds2s\" (UniqueName: 
\"kubernetes.io/projected/70e967a7-f90a-42a8-9d73-087c05c2ad6f-kube-api-access-nds2s\") pod \"barbican-api-69668dd468-5vzck\" (UID: \"70e967a7-f90a-42a8-9d73-087c05c2ad6f\") " pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.376909 4805 scope.go:117] "RemoveContainer" containerID="3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e" Feb 26 17:39:02 crc kubenswrapper[4805]: E0226 17:39:02.378044 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e\": container with ID starting with 3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e not found: ID does not exist" containerID="3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.378082 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e"} err="failed to get container status \"3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e\": rpc error: code = NotFound desc = could not find container \"3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e\": container with ID starting with 3811715fc62118c941eec324d0d63cc788fe89cdb37f88bf188ae11502ecb56e not found: ID does not exist" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.378109 4805 scope.go:117] "RemoveContainer" containerID="433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21" Feb 26 17:39:02 crc kubenswrapper[4805]: E0226 17:39:02.378728 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21\": container with ID starting with 433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21 not 
found: ID does not exist" containerID="433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.378752 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21"} err="failed to get container status \"433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21\": rpc error: code = NotFound desc = could not find container \"433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21\": container with ID starting with 433e3324004be683f373e40df57dc472fb5fc3f1af8627836d2fe8aac5b7ff21 not found: ID does not exist" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.478727 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-z2m6q"] Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.499434 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-z2m6q"] Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.619658 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.890347 4805 generic.go:334] "Generic (PLEG): container finished" podID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerID="4c9f921177f9e86148eead87a649e0cedb1d4ce62e84e748155467e4e85f58ef" exitCode=0 Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.890478 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" event={"ID":"45bd8308-31de-4e9a-b0cc-d60f191e9f3a","Type":"ContainerDied","Data":"4c9f921177f9e86148eead87a649e0cedb1d4ce62e84e748155467e4e85f58ef"} Feb 26 17:39:02 crc kubenswrapper[4805]: I0226 17:39:02.930862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" event={"ID":"1fef9a78-09bd-4e03-b7be-0f92d3aa705b","Type":"ContainerStarted","Data":"69f3081626bc75e68bbd59df574ddf15ced84bc4a9e4f0d15bd4b6b3e5c2490b"} Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.038113 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd043d5-4b5d-47f7-887b-5e1685b4c0ce" path="/var/lib/kubelet/pods/2bd043d5-4b5d-47f7-887b-5e1685b4c0ce/volumes" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.039207 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4349a64d-0728-48b1-aada-6e7977e291af" path="/var/lib/kubelet/pods/4349a64d-0728-48b1-aada-6e7977e291af/volumes" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.042834 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.042877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59c7c6ffc6-vlxnj" event={"ID":"91531636-caf5-4454-8cff-96134d359116","Type":"ContainerStarted","Data":"c0b39c74dad1e0842deaf5ff01197ca8fec19fff90317a931bdcffda247e5912"} Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 
17:39:03.042899 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.043280 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59c7c6ffc6-vlxnj" podStartSLOduration=5.04326233 podStartE2EDuration="5.04326233s" podCreationTimestamp="2026-02-26 17:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:03.024988199 +0000 UTC m=+1457.586742548" watchObservedRunningTime="2026-02-26 17:39:03.04326233 +0000 UTC m=+1457.605016669" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.254451 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.254604 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.332193 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69668dd468-5vzck"] Feb 26 17:39:03 crc kubenswrapper[4805]: W0226 17:39:03.357143 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70e967a7_f90a_42a8_9d73_087c05c2ad6f.slice/crio-f894d40dbc0fe3c9803e44c28bc05ab21534ea15336505b71815eca7821d3ee1 WatchSource:0}: Error finding container f894d40dbc0fe3c9803e44c28bc05ab21534ea15336505b71815eca7821d3ee1: Status 404 returned error can't find the container with id f894d40dbc0fe3c9803e44c28bc05ab21534ea15336505b71815eca7821d3ee1 Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.433137 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.570671 4805 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.570786 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.588420 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.634813 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.903950 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66c9c57f69-5r7hv"] Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.904470 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66c9c57f69-5r7hv" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-api" containerID="cri-o://ba7f10b014dccb0e6ad27fd1d69ec2a4e15adba750fa28c92b9528f3da0a5662" gracePeriod=30 Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.905326 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66c9c57f69-5r7hv" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" containerID="cri-o://1016df8f9d8a15b6c98fe96ede4c5294b7dc0bac93ab731f36779f30155311b4" gracePeriod=30 Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.927185 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cb9cbd877-vr7d7"] Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.929576 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:03 crc kubenswrapper[4805]: I0226 17:39:03.937661 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cb9cbd877-vr7d7"] Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.016122 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-66c9c57f69-5r7hv" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9696/\": read tcp 10.217.0.2:37964->10.217.0.176:9696: read: connection reset by peer" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.028516 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" event={"ID":"1fef9a78-09bd-4e03-b7be-0f92d3aa705b","Type":"ContainerStarted","Data":"812aaa90d7c85aefa95f8af2f204c58b96a105e80a0926d93ef0839a20cb6c0a"} Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.028620 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.028648 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.063306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" event={"ID":"45bd8308-31de-4e9a-b0cc-d60f191e9f3a","Type":"ContainerStarted","Data":"a3bcd7d0409f3b9a2bcc40555c21dd12fc3259f4315ff85210cd1622306063ac"} Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.066475 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.077929 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-internal-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.077972 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-httpd-config\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.078075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lxkg\" (UniqueName: \"kubernetes.io/projected/df0744eb-153e-4f1e-a678-cd7ed256e5dc-kube-api-access-6lxkg\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.078110 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-public-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.078203 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-config\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.078261 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-combined-ca-bundle\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.078327 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-ovndb-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.079758 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podStartSLOduration=5.079744822 podStartE2EDuration="5.079744822s" podCreationTimestamp="2026-02-26 17:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:04.059370488 +0000 UTC m=+1458.621124837" watchObservedRunningTime="2026-02-26 17:39:04.079744822 +0000 UTC m=+1458.641499161" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.079967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69668dd468-5vzck" event={"ID":"70e967a7-f90a-42a8-9d73-087c05c2ad6f","Type":"ContainerStarted","Data":"bbc07d675bf05d28b7bd8df3316dab534f6e9c64ee8912acf8615b7a0aa5beed"} Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.080001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69668dd468-5vzck" event={"ID":"70e967a7-f90a-42a8-9d73-087c05c2ad6f","Type":"ContainerStarted","Data":"f894d40dbc0fe3c9803e44c28bc05ab21534ea15336505b71815eca7821d3ee1"} Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.098659 4805 generic.go:334] "Generic (PLEG): container finished" podID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" 
containerID="fae150fe9c39f36ada3f8390a014777fbd381fda97923ec8607a1697117187e0" exitCode=0 Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.099220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dgxjz" event={"ID":"8969f13b-8f7b-4e4e-a891-eac8a978bb42","Type":"ContainerDied","Data":"fae150fe9c39f36ada3f8390a014777fbd381fda97923ec8607a1697117187e0"} Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.111053 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" podStartSLOduration=5.111012302 podStartE2EDuration="5.111012302s" podCreationTimestamp="2026-02-26 17:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:04.088292938 +0000 UTC m=+1458.650047297" watchObservedRunningTime="2026-02-26 17:39:04.111012302 +0000 UTC m=+1458.672766641" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.180814 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-internal-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.180866 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-httpd-config\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.181035 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lxkg\" (UniqueName: \"kubernetes.io/projected/df0744eb-153e-4f1e-a678-cd7ed256e5dc-kube-api-access-6lxkg\") pod 
\"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.181085 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-public-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.181195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-config\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.181259 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-combined-ca-bundle\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.181489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-ovndb-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.186996 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-ovndb-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " 
pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.188198 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-internal-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.188742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-public-tls-certs\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.191049 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-httpd-config\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.197713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-combined-ca-bundle\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.201232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0744eb-153e-4f1e-a678-cd7ed256e5dc-config\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.203532 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6lxkg\" (UniqueName: \"kubernetes.io/projected/df0744eb-153e-4f1e-a678-cd7ed256e5dc-kube-api-access-6lxkg\") pod \"neutron-7cb9cbd877-vr7d7\" (UID: \"df0744eb-153e-4f1e-a678-cd7ed256e5dc\") " pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:04 crc kubenswrapper[4805]: I0226 17:39:04.277265 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.126840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69668dd468-5vzck" event={"ID":"70e967a7-f90a-42a8-9d73-087c05c2ad6f","Type":"ContainerStarted","Data":"37c75d88488b7e76bfeaee71260bbaf43b72d2b04d1f8cd0b87fb1811dd12f8d"} Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.127152 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.127166 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.130419 4805 generic.go:334] "Generic (PLEG): container finished" podID="04a821a2-53df-4081-a120-61b7b90b3120" containerID="1016df8f9d8a15b6c98fe96ede4c5294b7dc0bac93ab731f36779f30155311b4" exitCode=0 Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.130571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66c9c57f69-5r7hv" event={"ID":"04a821a2-53df-4081-a120-61b7b90b3120","Type":"ContainerDied","Data":"1016df8f9d8a15b6c98fe96ede4c5294b7dc0bac93ab731f36779f30155311b4"} Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.157851 4805 generic.go:334] "Generic (PLEG): container finished" podID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" containerID="5d3e2c52590370cb3bb0c91f5dc57c28c1677079f74b3e277f915a0266c9046b" exitCode=0 Feb 26 17:39:05 crc 
kubenswrapper[4805]: I0226 17:39:05.158298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-xgqc6" event={"ID":"96d28605-2282-4e5f-93d6-5a3023c7bc9c","Type":"ContainerDied","Data":"5d3e2c52590370cb3bb0c91f5dc57c28c1677079f74b3e277f915a0266c9046b"} Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.160145 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69668dd468-5vzck" podStartSLOduration=4.160123822 podStartE2EDuration="4.160123822s" podCreationTimestamp="2026-02-26 17:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:05.15648865 +0000 UTC m=+1459.718242979" watchObservedRunningTime="2026-02-26 17:39:05.160123822 +0000 UTC m=+1459.721878161" Feb 26 17:39:05 crc kubenswrapper[4805]: I0226 17:39:05.408835 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-66c9c57f69-5r7hv" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9696/\": dial tcp 10.217.0.176:9696: connect: connection refused" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.160227 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.175099 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dgxjz" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.175355 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dgxjz" event={"ID":"8969f13b-8f7b-4e4e-a891-eac8a978bb42","Type":"ContainerDied","Data":"3e2376219ea3e4f9170fac82e02d4cae84985828c02abae3ee7f7d0983bbf932"} Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.175397 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2376219ea3e4f9170fac82e02d4cae84985828c02abae3ee7f7d0983bbf932" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.291555 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58gb\" (UniqueName: \"kubernetes.io/projected/8969f13b-8f7b-4e4e-a891-eac8a978bb42-kube-api-access-n58gb\") pod \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.291876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-scripts\") pod \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.291934 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8969f13b-8f7b-4e4e-a891-eac8a978bb42-etc-machine-id\") pod \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.291988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-combined-ca-bundle\") pod \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") 
" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.292010 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-db-sync-config-data\") pod \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.292090 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-config-data\") pod \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\" (UID: \"8969f13b-8f7b-4e4e-a891-eac8a978bb42\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.294144 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8969f13b-8f7b-4e4e-a891-eac8a978bb42-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8969f13b-8f7b-4e4e-a891-eac8a978bb42" (UID: "8969f13b-8f7b-4e4e-a891-eac8a978bb42"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.313918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8969f13b-8f7b-4e4e-a891-eac8a978bb42-kube-api-access-n58gb" (OuterVolumeSpecName: "kube-api-access-n58gb") pod "8969f13b-8f7b-4e4e-a891-eac8a978bb42" (UID: "8969f13b-8f7b-4e4e-a891-eac8a978bb42"). InnerVolumeSpecName "kube-api-access-n58gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.314553 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-scripts" (OuterVolumeSpecName: "scripts") pod "8969f13b-8f7b-4e4e-a891-eac8a978bb42" (UID: "8969f13b-8f7b-4e4e-a891-eac8a978bb42"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.314738 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8969f13b-8f7b-4e4e-a891-eac8a978bb42" (UID: "8969f13b-8f7b-4e4e-a891-eac8a978bb42"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.351741 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8969f13b-8f7b-4e4e-a891-eac8a978bb42" (UID: "8969f13b-8f7b-4e4e-a891-eac8a978bb42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.377105 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-config-data" (OuterVolumeSpecName: "config-data") pod "8969f13b-8f7b-4e4e-a891-eac8a978bb42" (UID: "8969f13b-8f7b-4e4e-a891-eac8a978bb42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.393587 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.393622 4805 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.393631 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.393640 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n58gb\" (UniqueName: \"kubernetes.io/projected/8969f13b-8f7b-4e4e-a891-eac8a978bb42-kube-api-access-n58gb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.393652 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8969f13b-8f7b-4e4e-a891-eac8a978bb42-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.393663 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8969f13b-8f7b-4e4e-a891-eac8a978bb42-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: W0226 17:39:06.498106 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf0744eb_153e_4f1e_a678_cd7ed256e5dc.slice/crio-ee23617074f236fec173bf9a8bbdba69efffd2b3e62d435bb22a917685957141 WatchSource:0}: Error 
finding container ee23617074f236fec173bf9a8bbdba69efffd2b3e62d435bb22a917685957141: Status 404 returned error can't find the container with id ee23617074f236fec173bf9a8bbdba69efffd2b3e62d435bb22a917685957141 Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.499522 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cb9cbd877-vr7d7"] Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.658378 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.702731 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gf7p\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-kube-api-access-9gf7p\") pod \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.702914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-scripts\") pod \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.702935 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-config-data\") pod \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.702982 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-combined-ca-bundle\") pod \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " Feb 26 17:39:06 crc 
kubenswrapper[4805]: I0226 17:39:06.703055 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-certs\") pod \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\" (UID: \"96d28605-2282-4e5f-93d6-5a3023c7bc9c\") " Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.707164 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-certs" (OuterVolumeSpecName: "certs") pod "96d28605-2282-4e5f-93d6-5a3023c7bc9c" (UID: "96d28605-2282-4e5f-93d6-5a3023c7bc9c"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.709957 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-kube-api-access-9gf7p" (OuterVolumeSpecName: "kube-api-access-9gf7p") pod "96d28605-2282-4e5f-93d6-5a3023c7bc9c" (UID: "96d28605-2282-4e5f-93d6-5a3023c7bc9c"). InnerVolumeSpecName "kube-api-access-9gf7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.714729 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-scripts" (OuterVolumeSpecName: "scripts") pod "96d28605-2282-4e5f-93d6-5a3023c7bc9c" (UID: "96d28605-2282-4e5f-93d6-5a3023c7bc9c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.805904 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gf7p\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-kube-api-access-9gf7p\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.805944 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.805958 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/96d28605-2282-4e5f-93d6-5a3023c7bc9c-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.810980 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-config-data" (OuterVolumeSpecName: "config-data") pod "96d28605-2282-4e5f-93d6-5a3023c7bc9c" (UID: "96d28605-2282-4e5f-93d6-5a3023c7bc9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.815120 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96d28605-2282-4e5f-93d6-5a3023c7bc9c" (UID: "96d28605-2282-4e5f-93d6-5a3023c7bc9c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.908528 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:06 crc kubenswrapper[4805]: I0226 17:39:06.909010 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96d28605-2282-4e5f-93d6-5a3023c7bc9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.215503 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-xgqc6" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.215529 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-xgqc6" event={"ID":"96d28605-2282-4e5f-93d6-5a3023c7bc9c","Type":"ContainerDied","Data":"2dbd58277dae4a81b363eba2481ae68eb589de3e839b54475217bdc6f86b03c8"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.215573 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dbd58277dae4a81b363eba2481ae68eb589de3e839b54475217bdc6f86b03c8" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.224428 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6" event={"ID":"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407","Type":"ContainerStarted","Data":"1d292bafebcee631ebf42279d811060ab8e0351c0b335170f738093c7525159d"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.224476 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6" event={"ID":"b3cf852a-8ab9-4f96-b639-cdf3d7cf8407","Type":"ContainerStarted","Data":"a1a029552b00b69aa190be43b17469143e722c2c55eaae287b1f48bf166513ea"} Feb 26 17:39:07 crc kubenswrapper[4805]: 
I0226 17:39:07.245475 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664585c8f5-7x9kv" event={"ID":"1232829e-cb44-4a21-b33b-c58be5fbd656","Type":"ContainerStarted","Data":"95cc6b9f74be73ec71bdb53f0c06964031e81736bc881a4b33f2a0be70a6f242"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.245525 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-664585c8f5-7x9kv" event={"ID":"1232829e-cb44-4a21-b33b-c58be5fbd656","Type":"ContainerStarted","Data":"d38164e84428fc8ee81b5c2431500cc3a42ac4c6cf13abd75bd9f5d9c0adca7c"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.254974 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb9cbd877-vr7d7" event={"ID":"df0744eb-153e-4f1e-a678-cd7ed256e5dc","Type":"ContainerStarted","Data":"717cff7726156ae170f97ef81e8228e19b1e38237bba54590fc3cc59e811881a"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.255060 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb9cbd877-vr7d7" event={"ID":"df0744eb-153e-4f1e-a678-cd7ed256e5dc","Type":"ContainerStarted","Data":"ca34bcafe00a898bc06bc9a7a0c7737a39fff340fd0b9cf1ffbdb875edd15a0e"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.255073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb9cbd877-vr7d7" event={"ID":"df0744eb-153e-4f1e-a678-cd7ed256e5dc","Type":"ContainerStarted","Data":"ee23617074f236fec173bf9a8bbdba69efffd2b3e62d435bb22a917685957141"} Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.255260 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.274241 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-74b677ff4d-kgqc6" podStartSLOduration=3.733145414 podStartE2EDuration="8.274223085s" podCreationTimestamp="2026-02-26 
17:38:59 +0000 UTC" firstStartedPulling="2026-02-26 17:39:01.283894892 +0000 UTC m=+1455.845649231" lastFinishedPulling="2026-02-26 17:39:05.824972573 +0000 UTC m=+1460.386726902" observedRunningTime="2026-02-26 17:39:07.261858613 +0000 UTC m=+1461.823612952" watchObservedRunningTime="2026-02-26 17:39:07.274223085 +0000 UTC m=+1461.835977424" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.295998 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cb9cbd877-vr7d7" podStartSLOduration=4.295978774 podStartE2EDuration="4.295978774s" podCreationTimestamp="2026-02-26 17:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:07.293370768 +0000 UTC m=+1461.855125107" watchObservedRunningTime="2026-02-26 17:39:07.295978774 +0000 UTC m=+1461.857733103" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.346982 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-664585c8f5-7x9kv" podStartSLOduration=3.822497189 podStartE2EDuration="8.34695568s" podCreationTimestamp="2026-02-26 17:38:59 +0000 UTC" firstStartedPulling="2026-02-26 17:39:01.300621315 +0000 UTC m=+1455.862375644" lastFinishedPulling="2026-02-26 17:39:05.825079796 +0000 UTC m=+1460.386834135" observedRunningTime="2026-02-26 17:39:07.337393109 +0000 UTC m=+1461.899147458" watchObservedRunningTime="2026-02-26 17:39:07.34695568 +0000 UTC m=+1461.908710019" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.382806 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-zzvd9"] Feb 26 17:39:07 crc kubenswrapper[4805]: E0226 17:39:07.383374 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" containerName="cloudkitty-db-sync" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.383399 4805 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" containerName="cloudkitty-db-sync" Feb 26 17:39:07 crc kubenswrapper[4805]: E0226 17:39:07.383443 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" containerName="cinder-db-sync" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.383452 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" containerName="cinder-db-sync" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.383689 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" containerName="cloudkitty-db-sync" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.383736 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" containerName="cinder-db-sync" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.384509 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.389167 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-ttgs5" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.389248 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.389536 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.389651 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.389765 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.414259 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-zzvd9"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.422697 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-config-data\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.423185 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-combined-ca-bundle\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.423318 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-scripts\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.423451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4cj\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-kube-api-access-5n4cj\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.423601 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-certs\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.503248 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.508366 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.517702 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.517942 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rw98k" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.518100 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.518239 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-certs\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525344 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-scripts\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525380 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-config-data\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-combined-ca-bundle\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5g6q\" (UniqueName: \"kubernetes.io/projected/712095a2-42ac-4ef6-a164-fbd4dd993948-kube-api-access-z5g6q\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525568 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-scripts\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525626 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/712095a2-42ac-4ef6-a164-fbd4dd993948-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525711 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.525743 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4cj\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-kube-api-access-5n4cj\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.535764 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.545473 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-scripts\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.549753 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-config-data\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.553342 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-combined-ca-bundle\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.571576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-certs\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.625739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4cj\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-kube-api-access-5n4cj\") pod \"cloudkitty-storageinit-zzvd9\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-scripts\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627336 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627389 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5g6q\" (UniqueName: \"kubernetes.io/projected/712095a2-42ac-4ef6-a164-fbd4dd993948-kube-api-access-z5g6q\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627458 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/712095a2-42ac-4ef6-a164-fbd4dd993948-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.627559 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/712095a2-42ac-4ef6-a164-fbd4dd993948-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.632069 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " 
pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.640918 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.644541 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.646561 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-scripts\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.651912 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bc5p"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.652179 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerName="dnsmasq-dns" containerID="cri-o://a3bcd7d0409f3b9a2bcc40555c21dd12fc3259f4315ff85210cd1622306063ac" gracePeriod=10 Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.683126 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5g6q\" (UniqueName: \"kubernetes.io/projected/712095a2-42ac-4ef6-a164-fbd4dd993948-kube-api-access-z5g6q\") pod \"cinder-scheduler-0\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " 
pod="openstack/cinder-scheduler-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.710993 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xhgvb"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.712643 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.739563 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xhgvb"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.739770 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.848366 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.849053 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.849192 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 
17:39:07.849264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pzt\" (UniqueName: \"kubernetes.io/projected/1db182f9-cd35-48d6-90d0-270eb8a25e4c-kube-api-access-d4pzt\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.849341 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.849394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-config\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.945431 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.949439 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.956636 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.958754 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.958808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.958844 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pzt\" (UniqueName: \"kubernetes.io/projected/1db182f9-cd35-48d6-90d0-270eb8a25e4c-kube-api-access-d4pzt\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.958876 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.958900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-config\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.959039 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.963068 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.963159 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.963804 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-config\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:07 crc kubenswrapper[4805]: I0226 17:39:07.980473 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.031644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-svc\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.031888 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.055914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pzt\" (UniqueName: \"kubernetes.io/projected/1db182f9-cd35-48d6-90d0-270eb8a25e4c-kube-api-access-d4pzt\") pod \"dnsmasq-dns-6578955fd5-xhgvb\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.063211 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fcq\" (UniqueName: \"kubernetes.io/projected/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-kube-api-access-62fcq\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.063302 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc 
kubenswrapper[4805]: I0226 17:39:08.063412 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-logs\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.063520 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data-custom\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.063568 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.063675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.063721 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-scripts\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.122502 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:08 crc 
kubenswrapper[4805]: I0226 17:39:08.165391 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.165462 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-scripts\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.165530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fcq\" (UniqueName: \"kubernetes.io/projected/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-kube-api-access-62fcq\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.165584 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.165630 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-logs\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.165716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.165757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.169251 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-logs\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.169599 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.174181 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.179386 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.182841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-scripts\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.184967 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data-custom\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.193231 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62fcq\" (UniqueName: \"kubernetes.io/projected/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-kube-api-access-62fcq\") pod \"cinder-api-0\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " pod="openstack/cinder-api-0" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.276909 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.288882 4805 generic.go:334] "Generic (PLEG): container finished" podID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerID="a3bcd7d0409f3b9a2bcc40555c21dd12fc3259f4315ff85210cd1622306063ac" exitCode=0 Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.289917 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" event={"ID":"45bd8308-31de-4e9a-b0cc-d60f191e9f3a","Type":"ContainerDied","Data":"a3bcd7d0409f3b9a2bcc40555c21dd12fc3259f4315ff85210cd1622306063ac"} Feb 26 17:39:08 crc kubenswrapper[4805]: I0226 17:39:08.316821 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.170307 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-zzvd9"] Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.369942 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.400618 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" event={"ID":"45bd8308-31de-4e9a-b0cc-d60f191e9f3a","Type":"ContainerDied","Data":"ff5190eb6cb75d27aaf463e077ffaf5180d15f971207b7f65aed7e21b3e03598"} Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.400671 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff5190eb6cb75d27aaf463e077ffaf5180d15f971207b7f65aed7e21b3e03598" Feb 26 17:39:09 crc kubenswrapper[4805]: W0226 17:39:09.444353 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod712095a2_42ac_4ef6_a164_fbd4dd993948.slice/crio-fbf6fafaf8615eddcd442adc1f9deb6fd30c9773384c7b0f688e95c26f42e5f3 WatchSource:0}: Error finding container fbf6fafaf8615eddcd442adc1f9deb6fd30c9773384c7b0f688e95c26f42e5f3: Status 404 returned error can't find the container with id fbf6fafaf8615eddcd442adc1f9deb6fd30c9773384c7b0f688e95c26f42e5f3 Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.444668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zzvd9" event={"ID":"f369c5e8-1932-4293-9e75-6b74c4d4eb21","Type":"ContainerStarted","Data":"45fa3d599c0b4162435694cda58e678c29e69676432d7a041b3eeb635da33b96"} Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.483695 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.584157 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-sb\") pod \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.584264 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-svc\") pod \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.584302 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-swift-storage-0\") pod \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.584348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-nb\") pod \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.584470 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmjx5\" (UniqueName: \"kubernetes.io/projected/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-kube-api-access-jmjx5\") pod \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.584513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-config\") pod \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\" (UID: \"45bd8308-31de-4e9a-b0cc-d60f191e9f3a\") " Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.755103 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-kube-api-access-jmjx5" (OuterVolumeSpecName: "kube-api-access-jmjx5") pod "45bd8308-31de-4e9a-b0cc-d60f191e9f3a" (UID: "45bd8308-31de-4e9a-b0cc-d60f191e9f3a"). InnerVolumeSpecName "kube-api-access-jmjx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.791008 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "45bd8308-31de-4e9a-b0cc-d60f191e9f3a" (UID: "45bd8308-31de-4e9a-b0cc-d60f191e9f3a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.800546 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45bd8308-31de-4e9a-b0cc-d60f191e9f3a" (UID: "45bd8308-31de-4e9a-b0cc-d60f191e9f3a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.813849 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45bd8308-31de-4e9a-b0cc-d60f191e9f3a" (UID: "45bd8308-31de-4e9a-b0cc-d60f191e9f3a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.821339 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45bd8308-31de-4e9a-b0cc-d60f191e9f3a" (UID: "45bd8308-31de-4e9a-b0cc-d60f191e9f3a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.835317 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.835349 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.835358 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmjx5\" (UniqueName: \"kubernetes.io/projected/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-kube-api-access-jmjx5\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.835370 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.835380 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:09 crc kubenswrapper[4805]: I0226 17:39:09.948061 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-config" (OuterVolumeSpecName: "config") pod "45bd8308-31de-4e9a-b0cc-d60f191e9f3a" (UID: "45bd8308-31de-4e9a-b0cc-d60f191e9f3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.007489 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.031730 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xhgvb"] Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.042670 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45bd8308-31de-4e9a-b0cc-d60f191e9f3a-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.599454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc","Type":"ContainerStarted","Data":"5614a4f0bf9e1fecf2b77e9b95eb5d5a02ef3cba2c8ae20add81ecfa9729242e"} Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.627993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" event={"ID":"1db182f9-cd35-48d6-90d0-270eb8a25e4c","Type":"ContainerStarted","Data":"afb3fa2aab56d9c2f02851e6323a6cdc683d3698149677e07a089d4c0eac5b32"} Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.678417 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zzvd9" event={"ID":"f369c5e8-1932-4293-9e75-6b74c4d4eb21","Type":"ContainerStarted","Data":"a26931c8694fe1c33ac5d7e1bcfb562f5c502f20f4d2afd41e4c61a2bcbb4889"} Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.705784 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6bc5p" Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.707661 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"712095a2-42ac-4ef6-a164-fbd4dd993948","Type":"ContainerStarted","Data":"fbf6fafaf8615eddcd442adc1f9deb6fd30c9773384c7b0f688e95c26f42e5f3"} Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.717246 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-zzvd9" podStartSLOduration=3.71721895 podStartE2EDuration="3.71721895s" podCreationTimestamp="2026-02-26 17:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:10.701645057 +0000 UTC m=+1465.263399396" watchObservedRunningTime="2026-02-26 17:39:10.71721895 +0000 UTC m=+1465.278973289" Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.794743 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bc5p"] Feb 26 17:39:10 crc kubenswrapper[4805]: I0226 17:39:10.816970 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6bc5p"] Feb 26 17:39:11 crc kubenswrapper[4805]: I0226 17:39:11.003796 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" path="/var/lib/kubelet/pods/45bd8308-31de-4e9a-b0cc-d60f191e9f3a/volumes" Feb 26 17:39:11 crc kubenswrapper[4805]: I0226 17:39:11.622787 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:11 crc kubenswrapper[4805]: I0226 17:39:11.836067 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc","Type":"ContainerStarted","Data":"14fe24a5b70d3b18006c86e4c7ea2af357f7eed993c30df44399f63d178d3af5"} Feb 26 17:39:11 crc 
kubenswrapper[4805]: I0226 17:39:11.837776 4805 generic.go:334] "Generic (PLEG): container finished" podID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerID="ef47d40952554ed87d27e4aefdbac906c98429d0d38c98591f445d40b6182668" exitCode=0 Feb 26 17:39:11 crc kubenswrapper[4805]: I0226 17:39:11.838680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" event={"ID":"1db182f9-cd35-48d6-90d0-270eb8a25e4c","Type":"ContainerDied","Data":"ef47d40952554ed87d27e4aefdbac906c98429d0d38c98591f445d40b6182668"} Feb 26 17:39:12 crc kubenswrapper[4805]: I0226 17:39:12.886214 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" event={"ID":"1db182f9-cd35-48d6-90d0-270eb8a25e4c","Type":"ContainerStarted","Data":"cdc814a1c48991a2eafc0aa3500a5a27c075d9ae5389051b5809c5e2b9df5fb4"} Feb 26 17:39:12 crc kubenswrapper[4805]: I0226 17:39:12.886716 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:12 crc kubenswrapper[4805]: I0226 17:39:12.892577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"712095a2-42ac-4ef6-a164-fbd4dd993948","Type":"ContainerStarted","Data":"6d82f8ced4e063f5e012bf24b9ef1dabc29bf93b9847be756d1e88ad4ddfa43a"} Feb 26 17:39:12 crc kubenswrapper[4805]: I0226 17:39:12.915810 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:12 crc kubenswrapper[4805]: I0226 17:39:12.923916 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" podStartSLOduration=5.923897988 podStartE2EDuration="5.923897988s" podCreationTimestamp="2026-02-26 17:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:12.910121781 +0000 UTC 
m=+1467.471876120" watchObservedRunningTime="2026-02-26 17:39:12.923897988 +0000 UTC m=+1467.485652327" Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.914966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"712095a2-42ac-4ef6-a164-fbd4dd993948","Type":"ContainerStarted","Data":"f59d5527480b35afb418b83882cab58f4236ff1609a223166cd770c261333e8b"} Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.920317 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api-log" containerID="cri-o://14fe24a5b70d3b18006c86e4c7ea2af357f7eed993c30df44399f63d178d3af5" gracePeriod=30 Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.920686 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api" containerID="cri-o://b6edfd04093a21664526f6f96a5b6af2bc1142a82517fac2e6884ec9a6d3bac0" gracePeriod=30 Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.920749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc","Type":"ContainerStarted","Data":"b6edfd04093a21664526f6f96a5b6af2bc1142a82517fac2e6884ec9a6d3bac0"} Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.920784 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.936206 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.388940335 podStartE2EDuration="6.93619219s" podCreationTimestamp="2026-02-26 17:39:07 +0000 UTC" firstStartedPulling="2026-02-26 17:39:09.467074574 +0000 UTC m=+1464.028828913" lastFinishedPulling="2026-02-26 17:39:11.014326429 +0000 UTC 
m=+1465.576080768" observedRunningTime="2026-02-26 17:39:13.932517318 +0000 UTC m=+1468.494271657" watchObservedRunningTime="2026-02-26 17:39:13.93619219 +0000 UTC m=+1468.497946529" Feb 26 17:39:13 crc kubenswrapper[4805]: I0226 17:39:13.960293 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.960272958 podStartE2EDuration="6.960272958s" podCreationTimestamp="2026-02-26 17:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:13.955112418 +0000 UTC m=+1468.516866777" watchObservedRunningTime="2026-02-26 17:39:13.960272958 +0000 UTC m=+1468.522027287" Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.123664 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.932679 4805 generic.go:334] "Generic (PLEG): container finished" podID="f369c5e8-1932-4293-9e75-6b74c4d4eb21" containerID="a26931c8694fe1c33ac5d7e1bcfb562f5c502f20f4d2afd41e4c61a2bcbb4889" exitCode=0 Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.932760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zzvd9" event={"ID":"f369c5e8-1932-4293-9e75-6b74c4d4eb21","Type":"ContainerDied","Data":"a26931c8694fe1c33ac5d7e1bcfb562f5c502f20f4d2afd41e4c61a2bcbb4889"} Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.941890 4805 generic.go:334] "Generic (PLEG): container finished" podID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerID="b6edfd04093a21664526f6f96a5b6af2bc1142a82517fac2e6884ec9a6d3bac0" exitCode=0 Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.941928 4805 generic.go:334] "Generic (PLEG): container finished" podID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerID="14fe24a5b70d3b18006c86e4c7ea2af357f7eed993c30df44399f63d178d3af5" 
exitCode=143 Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.942113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc","Type":"ContainerDied","Data":"b6edfd04093a21664526f6f96a5b6af2bc1142a82517fac2e6884ec9a6d3bac0"} Feb 26 17:39:14 crc kubenswrapper[4805]: I0226 17:39:14.942171 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc","Type":"ContainerDied","Data":"14fe24a5b70d3b18006c86e4c7ea2af357f7eed993c30df44399f63d178d3af5"} Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.302312 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.378648 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69668dd468-5vzck" Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.469350 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cf7d6fbd8-c2wwp"] Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.469626 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" containerID="cri-o://69f3081626bc75e68bbd59df574ddf15ced84bc4a9e4f0d15bd4b6b3e5c2490b" gracePeriod=30 Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.469936 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api" containerID="cri-o://812aaa90d7c85aefa95f8af2f204c58b96a105e80a0926d93ef0839a20cb6c0a" gracePeriod=30 Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.485532 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.185:9311/healthcheck\": EOF" Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.485547 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.185:9311/healthcheck\": EOF" Feb 26 17:39:15 crc kubenswrapper[4805]: I0226 17:39:15.501708 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.185:9311/healthcheck\": EOF" Feb 26 17:39:16 crc kubenswrapper[4805]: I0226 17:39:16.108749 4805 generic.go:334] "Generic (PLEG): container finished" podID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerID="69f3081626bc75e68bbd59df574ddf15ced84bc4a9e4f0d15bd4b6b3e5c2490b" exitCode=143 Feb 26 17:39:16 crc kubenswrapper[4805]: I0226 17:39:16.109557 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" event={"ID":"1fef9a78-09bd-4e03-b7be-0f92d3aa705b","Type":"ContainerDied","Data":"69f3081626bc75e68bbd59df574ddf15ced84bc4a9e4f0d15bd4b6b3e5c2490b"} Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.646076 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rb2p9"] Feb 26 17:39:17 crc kubenswrapper[4805]: E0226 17:39:17.647088 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerName="init" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.647108 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerName="init" Feb 26 17:39:17 crc 
kubenswrapper[4805]: E0226 17:39:17.647135 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerName="dnsmasq-dns" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.647144 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerName="dnsmasq-dns" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.647594 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="45bd8308-31de-4e9a-b0cc-d60f191e9f3a" containerName="dnsmasq-dns" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.759966 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.787257 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb2p9"] Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.864711 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db58eaee-5842-4d11-babf-1ededef9c68e-catalog-content\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.864787 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bckv\" (UniqueName: \"kubernetes.io/projected/db58eaee-5842-4d11-babf-1ededef9c68e-kube-api-access-7bckv\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.864931 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/db58eaee-5842-4d11-babf-1ededef9c68e-utilities\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.967465 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db58eaee-5842-4d11-babf-1ededef9c68e-utilities\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.967696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db58eaee-5842-4d11-babf-1ededef9c68e-catalog-content\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.967741 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bckv\" (UniqueName: \"kubernetes.io/projected/db58eaee-5842-4d11-babf-1ededef9c68e-kube-api-access-7bckv\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.975410 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db58eaee-5842-4d11-babf-1ededef9c68e-utilities\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.977315 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/db58eaee-5842-4d11-babf-1ededef9c68e-catalog-content\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.981478 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 17:39:17 crc kubenswrapper[4805]: I0226 17:39:17.990427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bckv\" (UniqueName: \"kubernetes.io/projected/db58eaee-5842-4d11-babf-1ededef9c68e-kube-api-access-7bckv\") pod \"redhat-operators-rb2p9\" (UID: \"db58eaee-5842-4d11-babf-1ededef9c68e\") " pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.100177 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.257835 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.279158 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.299728 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.319999 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.191:8776/healthcheck\": dial tcp 10.217.0.191:8776: connect: connection refused" Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.386824 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-sj72q"] 
Feb 26 17:39:18 crc kubenswrapper[4805]: I0226 17:39:18.387126 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="dnsmasq-dns" containerID="cri-o://d240c37e4c6840bafea545fc49876c1ff4731b8d719f5aeb77752f049fd123a6" gracePeriod=10 Feb 26 17:39:19 crc kubenswrapper[4805]: I0226 17:39:19.151923 4805 generic.go:334] "Generic (PLEG): container finished" podID="4a310598-e458-4267-99fd-2a14ee356946" containerID="d240c37e4c6840bafea545fc49876c1ff4731b8d719f5aeb77752f049fd123a6" exitCode=0 Feb 26 17:39:19 crc kubenswrapper[4805]: I0226 17:39:19.151976 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" event={"ID":"4a310598-e458-4267-99fd-2a14ee356946","Type":"ContainerDied","Data":"d240c37e4c6840bafea545fc49876c1ff4731b8d719f5aeb77752f049fd123a6"} Feb 26 17:39:19 crc kubenswrapper[4805]: I0226 17:39:19.153071 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="cinder-scheduler" containerID="cri-o://6d82f8ced4e063f5e012bf24b9ef1dabc29bf93b9847be756d1e88ad4ddfa43a" gracePeriod=30 Feb 26 17:39:19 crc kubenswrapper[4805]: I0226 17:39:19.153069 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="probe" containerID="cri-o://f59d5527480b35afb418b83882cab58f4236ff1609a223166cd770c261333e8b" gracePeriod=30 Feb 26 17:39:20 crc kubenswrapper[4805]: I0226 17:39:20.165500 4805 generic.go:334] "Generic (PLEG): container finished" podID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerID="f59d5527480b35afb418b83882cab58f4236ff1609a223166cd770c261333e8b" exitCode=0 Feb 26 17:39:20 crc kubenswrapper[4805]: I0226 17:39:20.165579 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"712095a2-42ac-4ef6-a164-fbd4dd993948","Type":"ContainerDied","Data":"f59d5527480b35afb418b83882cab58f4236ff1609a223166cd770c261333e8b"} Feb 26 17:39:20 crc kubenswrapper[4805]: I0226 17:39:20.960949 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.185:9311/healthcheck\": read tcp 10.217.0.2:47762->10.217.0.185:9311: read: connection reset by peer" Feb 26 17:39:20 crc kubenswrapper[4805]: I0226 17:39:20.960968 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.185:9311/healthcheck\": read tcp 10.217.0.2:47764->10.217.0.185:9311: read: connection reset by peer" Feb 26 17:39:20 crc kubenswrapper[4805]: I0226 17:39:20.973973 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.073405 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.187162 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-zzvd9" event={"ID":"f369c5e8-1932-4293-9e75-6b74c4d4eb21","Type":"ContainerDied","Data":"45fa3d599c0b4162435694cda58e678c29e69676432d7a041b3eeb635da33b96"} Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.187242 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45fa3d599c0b4162435694cda58e678c29e69676432d7a041b3eeb635da33b96" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.187208 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-zzvd9" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.189327 4805 generic.go:334] "Generic (PLEG): container finished" podID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerID="6d82f8ced4e063f5e012bf24b9ef1dabc29bf93b9847be756d1e88ad4ddfa43a" exitCode=0 Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.189394 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"712095a2-42ac-4ef6-a164-fbd4dd993948","Type":"ContainerDied","Data":"6d82f8ced4e063f5e012bf24b9ef1dabc29bf93b9847be756d1e88ad4ddfa43a"} Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.191657 4805 generic.go:334] "Generic (PLEG): container finished" podID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerID="812aaa90d7c85aefa95f8af2f204c58b96a105e80a0926d93ef0839a20cb6c0a" exitCode=0 Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.191729 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" event={"ID":"1fef9a78-09bd-4e03-b7be-0f92d3aa705b","Type":"ContainerDied","Data":"812aaa90d7c85aefa95f8af2f204c58b96a105e80a0926d93ef0839a20cb6c0a"} Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.245713 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-certs\") pod \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.245861 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-scripts\") pod \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.251063 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-5n4cj\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-kube-api-access-5n4cj\") pod \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.251142 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-combined-ca-bundle\") pod \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.251303 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-config-data\") pod \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\" (UID: \"f369c5e8-1932-4293-9e75-6b74c4d4eb21\") " Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.257835 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-kube-api-access-5n4cj" (OuterVolumeSpecName: "kube-api-access-5n4cj") pod "f369c5e8-1932-4293-9e75-6b74c4d4eb21" (UID: "f369c5e8-1932-4293-9e75-6b74c4d4eb21"). InnerVolumeSpecName "kube-api-access-5n4cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.286453 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-certs" (OuterVolumeSpecName: "certs") pod "f369c5e8-1932-4293-9e75-6b74c4d4eb21" (UID: "f369c5e8-1932-4293-9e75-6b74c4d4eb21"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.310332 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-scripts" (OuterVolumeSpecName: "scripts") pod "f369c5e8-1932-4293-9e75-6b74c4d4eb21" (UID: "f369c5e8-1932-4293-9e75-6b74c4d4eb21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.317222 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-config-data" (OuterVolumeSpecName: "config-data") pod "f369c5e8-1932-4293-9e75-6b74c4d4eb21" (UID: "f369c5e8-1932-4293-9e75-6b74c4d4eb21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.336201 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f369c5e8-1932-4293-9e75-6b74c4d4eb21" (UID: "f369c5e8-1932-4293-9e75-6b74c4d4eb21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.357220 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n4cj\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-kube-api-access-5n4cj\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.357270 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.357297 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.357309 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/f369c5e8-1932-4293-9e75-6b74c4d4eb21-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:21 crc kubenswrapper[4805]: I0226 17:39:21.357321 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f369c5e8-1932-4293-9e75-6b74c4d4eb21-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.441883 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:22 crc kubenswrapper[4805]: E0226 17:39:22.442878 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f369c5e8-1932-4293-9e75-6b74c4d4eb21" containerName="cloudkitty-storageinit" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.442893 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f369c5e8-1932-4293-9e75-6b74c4d4eb21" containerName="cloudkitty-storageinit" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.443592 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f369c5e8-1932-4293-9e75-6b74c4d4eb21" containerName="cloudkitty-storageinit" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.444773 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.462642 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.462940 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.462646 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-ttgs5" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.463283 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.463363 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.490155 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.567119 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-854cm"] Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.569284 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.578460 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.578530 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.578592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.578693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-certs\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.578734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-scripts\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc 
kubenswrapper[4805]: I0226 17:39:22.578842 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92prt\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-kube-api-access-92prt\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.581470 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-854cm"] Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.680454 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681170 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681350 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-certs\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 
26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681478 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5tmv\" (UniqueName: \"kubernetes.io/projected/753f754f-839c-49a1-81e7-93d2c94a9cc7-kube-api-access-c5tmv\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681595 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-scripts\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92prt\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-kube-api-access-92prt\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681866 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.681980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc 
kubenswrapper[4805]: I0226 17:39:22.682116 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-config\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.682242 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-svc\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.682409 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.691532 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.694037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.694220 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-certs\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.697955 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-scripts\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.704259 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.711980 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92prt\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-kube-api-access-92prt\") pod \"cloudkitty-proc-0\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.767648 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.769739 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.773038 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.784068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5tmv\" (UniqueName: \"kubernetes.io/projected/753f754f-839c-49a1-81e7-93d2c94a9cc7-kube-api-access-c5tmv\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.784173 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.784217 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.784245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-config\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.784274 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-svc\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.784356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.785697 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-sb\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.785796 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-nb\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.787644 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.788441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-config\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.788947 4805 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-svc\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.791254 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-swift-storage-0\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.810028 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5tmv\" (UniqueName: \"kubernetes.io/projected/753f754f-839c-49a1-81e7-93d2c94a9cc7-kube-api-access-c5tmv\") pod \"dnsmasq-dns-58bd69657f-854cm\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.824061 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.838047 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.887959 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5g6q\" (UniqueName: \"kubernetes.io/projected/712095a2-42ac-4ef6-a164-fbd4dd993948-kube-api-access-z5g6q\") pod \"712095a2-42ac-4ef6-a164-fbd4dd993948\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.894581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data-custom\") pod \"712095a2-42ac-4ef6-a164-fbd4dd993948\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.894897 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-combined-ca-bundle\") pod \"712095a2-42ac-4ef6-a164-fbd4dd993948\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.895078 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-scripts\") pod \"712095a2-42ac-4ef6-a164-fbd4dd993948\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.895543 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data\") pod \"712095a2-42ac-4ef6-a164-fbd4dd993948\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.895736 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/712095a2-42ac-4ef6-a164-fbd4dd993948-etc-machine-id\") pod \"712095a2-42ac-4ef6-a164-fbd4dd993948\" (UID: \"712095a2-42ac-4ef6-a164-fbd4dd993948\") " Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.896001 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712095a2-42ac-4ef6-a164-fbd4dd993948-kube-api-access-z5g6q" (OuterVolumeSpecName: "kube-api-access-z5g6q") pod "712095a2-42ac-4ef6-a164-fbd4dd993948" (UID: "712095a2-42ac-4ef6-a164-fbd4dd993948"). InnerVolumeSpecName "kube-api-access-z5g6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.896190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-scripts\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.906348 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.906664 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3331db20-1348-4e54-81c3-c1e6e3cba7a8-logs\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.896998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/712095a2-42ac-4ef6-a164-fbd4dd993948-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "712095a2-42ac-4ef6-a164-fbd4dd993948" (UID: "712095a2-42ac-4ef6-a164-fbd4dd993948"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.906762 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.906960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.907073 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-certs\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.907193 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcvjk\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-kube-api-access-mcvjk\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.907207 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data-custom" (OuterVolumeSpecName: 
"config-data-custom") pod "712095a2-42ac-4ef6-a164-fbd4dd993948" (UID: "712095a2-42ac-4ef6-a164-fbd4dd993948"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.907586 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/712095a2-42ac-4ef6-a164-fbd4dd993948-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.912726 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5g6q\" (UniqueName: \"kubernetes.io/projected/712095a2-42ac-4ef6-a164-fbd4dd993948-kube-api-access-z5g6q\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.911421 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-scripts" (OuterVolumeSpecName: "scripts") pod "712095a2-42ac-4ef6-a164-fbd4dd993948" (UID: "712095a2-42ac-4ef6-a164-fbd4dd993948"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.913419 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:22 crc kubenswrapper[4805]: I0226 17:39:22.950332 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.002475 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "712095a2-42ac-4ef6-a164-fbd4dd993948" (UID: "712095a2-42ac-4ef6-a164-fbd4dd993948"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.016599 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.016921 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3331db20-1348-4e54-81c3-c1e6e3cba7a8-logs\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017057 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017248 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017358 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-certs\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017489 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mcvjk\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-kube-api-access-mcvjk\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017695 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-scripts\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017848 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.017947 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.018144 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3331db20-1348-4e54-81c3-c1e6e3cba7a8-logs\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.033795 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.033976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.039645 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-scripts\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.042373 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-certs\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.054860 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcvjk\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-kube-api-access-mcvjk\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.055370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.109736 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.277951 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data" (OuterVolumeSpecName: "config-data") pod "712095a2-42ac-4ef6-a164-fbd4dd993948" (UID: "712095a2-42ac-4ef6-a164-fbd4dd993948"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.287030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" event={"ID":"4a310598-e458-4267-99fd-2a14ee356946","Type":"ContainerDied","Data":"bd90b42704ace84914d7e4c8a2bf4f250a90e3abd9c91495ac7e581742ea2ebe"} Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.287083 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd90b42704ace84914d7e4c8a2bf4f250a90e3abd9c91495ac7e581742ea2ebe" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.296443 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.306000 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc","Type":"ContainerDied","Data":"5614a4f0bf9e1fecf2b77e9b95eb5d5a02ef3cba2c8ae20add81ecfa9729242e"} Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.306067 4805 scope.go:117] "RemoveContainer" containerID="b6edfd04093a21664526f6f96a5b6af2bc1142a82517fac2e6884ec9a6d3bac0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.310731 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.324595 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fcq\" (UniqueName: \"kubernetes.io/projected/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-kube-api-access-62fcq\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.324818 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.324914 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-scripts\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.325043 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data-custom\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.325073 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-logs\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.325125 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-combined-ca-bundle\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.325183 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-etc-machine-id\") pod \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\" (UID: \"372c1ddd-b0b5-409d-a09f-30bf1c17e5bc\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.325607 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/712095a2-42ac-4ef6-a164-fbd4dd993948-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.325663 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.336430 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-kube-api-access-62fcq" (OuterVolumeSpecName: "kube-api-access-62fcq") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "kube-api-access-62fcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.336712 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-logs" (OuterVolumeSpecName: "logs") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.339169 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.341938 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb2p9"] Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.342799 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-scripts" (OuterVolumeSpecName: "scripts") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.350029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" event={"ID":"1fef9a78-09bd-4e03-b7be-0f92d3aa705b","Type":"ContainerDied","Data":"2be13c1d0b225c7d4e5be6029fab2803e8901769e8c99716c35ba7df19c01e53"} Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.350065 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be13c1d0b225c7d4e5be6029fab2803e8901769e8c99716c35ba7df19c01e53" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.351144 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.395443 4805 scope.go:117] "RemoveContainer" containerID="14fe24a5b70d3b18006c86e4c7ea2af357f7eed993c30df44399f63d178d3af5" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.398033 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.401278 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerStarted","Data":"174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6"} Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.401469 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-central-agent" containerID="cri-o://42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112" gracePeriod=30 Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.401569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.401633 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="proxy-httpd" containerID="cri-o://174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6" gracePeriod=30 Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.401685 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="sg-core" containerID="cri-o://2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454" gracePeriod=30 Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.401725 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-notification-agent" containerID="cri-o://8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2" gracePeriod=30 Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440186 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-logs\") pod \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440290 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c7n9\" (UniqueName: \"kubernetes.io/projected/4a310598-e458-4267-99fd-2a14ee356946-kube-api-access-2c7n9\") pod \"4a310598-e458-4267-99fd-2a14ee356946\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440319 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-dns-svc\") pod \"4a310598-e458-4267-99fd-2a14ee356946\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440341 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-combined-ca-bundle\") pod \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440409 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data-custom\") pod \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440447 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-nb\") pod \"4a310598-e458-4267-99fd-2a14ee356946\" 
(UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440465 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data\") pod \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmg22\" (UniqueName: \"kubernetes.io/projected/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-kube-api-access-bmg22\") pod \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\" (UID: \"1fef9a78-09bd-4e03-b7be-0f92d3aa705b\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-sb\") pod \"4a310598-e458-4267-99fd-2a14ee356946\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.440728 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-config\") pod \"4a310598-e458-4267-99fd-2a14ee356946\" (UID: \"4a310598-e458-4267-99fd-2a14ee356946\") " Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.441559 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fcq\" (UniqueName: \"kubernetes.io/projected/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-kube-api-access-62fcq\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.441581 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 
17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.441592 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.441600 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.441608 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.441617 4805 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.458540 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1fef9a78-09bd-4e03-b7be-0f92d3aa705b" (UID: "1fef9a78-09bd-4e03-b7be-0f92d3aa705b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.459474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-logs" (OuterVolumeSpecName: "logs") pod "1fef9a78-09bd-4e03-b7be-0f92d3aa705b" (UID: "1fef9a78-09bd-4e03-b7be-0f92d3aa705b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.474567 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"712095a2-42ac-4ef6-a164-fbd4dd993948","Type":"ContainerDied","Data":"fbf6fafaf8615eddcd442adc1f9deb6fd30c9773384c7b0f688e95c26f42e5f3"} Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.474695 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.476084 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-kube-api-access-bmg22" (OuterVolumeSpecName: "kube-api-access-bmg22") pod "1fef9a78-09bd-4e03-b7be-0f92d3aa705b" (UID: "1fef9a78-09bd-4e03-b7be-0f92d3aa705b"). InnerVolumeSpecName "kube-api-access-bmg22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: W0226 17:39:23.485292 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb58eaee_5842_4d11_babf_1ededef9c68e.slice/crio-d175c4171d248daa769119b5b9c20877806dbe3893c6e5296bcf47c2349b30ca WatchSource:0}: Error finding container d175c4171d248daa769119b5b9c20877806dbe3893c6e5296bcf47c2349b30ca: Status 404 returned error can't find the container with id d175c4171d248daa769119b5b9c20877806dbe3893c6e5296bcf47c2349b30ca Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.497885 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a310598-e458-4267-99fd-2a14ee356946-kube-api-access-2c7n9" (OuterVolumeSpecName: "kube-api-access-2c7n9") pod "4a310598-e458-4267-99fd-2a14ee356946" (UID: "4a310598-e458-4267-99fd-2a14ee356946"). InnerVolumeSpecName "kube-api-access-2c7n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.530040 4805 scope.go:117] "RemoveContainer" containerID="f59d5527480b35afb418b83882cab58f4236ff1609a223166cd770c261333e8b" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.542452 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.770000988 podStartE2EDuration="1m27.542429993s" podCreationTimestamp="2026-02-26 17:37:56 +0000 UTC" firstStartedPulling="2026-02-26 17:37:58.385993214 +0000 UTC m=+1392.947747553" lastFinishedPulling="2026-02-26 17:39:22.158422219 +0000 UTC m=+1476.720176558" observedRunningTime="2026-02-26 17:39:23.504471255 +0000 UTC m=+1478.066225624" watchObservedRunningTime="2026-02-26 17:39:23.542429993 +0000 UTC m=+1478.104184332" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.570830 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.570888 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmg22\" (UniqueName: \"kubernetes.io/projected/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-kube-api-access-bmg22\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.570911 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.570923 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c7n9\" (UniqueName: \"kubernetes.io/projected/4a310598-e458-4267-99fd-2a14ee356946-kube-api-access-2c7n9\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.699190 
4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data" (OuterVolumeSpecName: "config-data") pod "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" (UID: "372c1ddd-b0b5-409d-a09f-30bf1c17e5bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.710857 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fef9a78-09bd-4e03-b7be-0f92d3aa705b" (UID: "1fef9a78-09bd-4e03-b7be-0f92d3aa705b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.787466 4805 scope.go:117] "RemoveContainer" containerID="6d82f8ced4e063f5e012bf24b9ef1dabc29bf93b9847be756d1e88ad4ddfa43a" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.799655 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.799685 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.801417 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.823198 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data" (OuterVolumeSpecName: "config-data") pod "1fef9a78-09bd-4e03-b7be-0f92d3aa705b" (UID: 
"1fef9a78-09bd-4e03-b7be-0f92d3aa705b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.825576 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-config" (OuterVolumeSpecName: "config") pod "4a310598-e458-4267-99fd-2a14ee356946" (UID: "4a310598-e458-4267-99fd-2a14ee356946"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.840967 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.841815 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a310598-e458-4267-99fd-2a14ee356946" (UID: "4a310598-e458-4267-99fd-2a14ee356946"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.881686 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a310598-e458-4267-99fd-2a14ee356946" (UID: "4a310598-e458-4267-99fd-2a14ee356946"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.882423 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a310598-e458-4267-99fd-2a14ee356946" (UID: "4a310598-e458-4267-99fd-2a14ee356946"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.890322 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.890928 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.890947 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.890967 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="init" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.890975 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="init" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.890990 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="dnsmasq-dns" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.890999 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="dnsmasq-dns" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.891033 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="cinder-scheduler" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891041 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="cinder-scheduler" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.891068 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api" Feb 26 17:39:23 crc 
kubenswrapper[4805]: I0226 17:39:23.891076 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.891095 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api-log" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891102 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api-log" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.891128 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="probe" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891136 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="probe" Feb 26 17:39:23 crc kubenswrapper[4805]: E0226 17:39:23.891155 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891163 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891391 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891412 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.891434 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="cinder-scheduler" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 
17:39:23.892759 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" containerName="probe" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.892783 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="dnsmasq-dns" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.892797 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" containerName="cinder-api-log" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.892825 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" containerName="barbican-api-log" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.894433 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.904149 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.904187 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.904201 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fef9a78-09bd-4e03-b7be-0f92d3aa705b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.904213 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc 
kubenswrapper[4805]: I0226 17:39:23.904225 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a310598-e458-4267-99fd-2a14ee356946-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.910871 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 26 17:39:23 crc kubenswrapper[4805]: I0226 17:39:23.957594 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.005524 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-config-data\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.005566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.005594 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rmw\" (UniqueName: \"kubernetes.io/projected/e1b0e927-cb66-4694-890d-33b20573ccca-kube-api-access-w6rmw\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.005658 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.005701 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b0e927-cb66-4694-890d-33b20573ccca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.005758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-scripts\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.107403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.107839 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b0e927-cb66-4694-890d-33b20573ccca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.107904 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.107975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-config-data\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.107997 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.108050 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6rmw\" (UniqueName: \"kubernetes.io/projected/e1b0e927-cb66-4694-890d-33b20573ccca-kube-api-access-w6rmw\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.108115 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e1b0e927-cb66-4694-890d-33b20573ccca-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.112706 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-scripts\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.120986 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.121787 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-config-data\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.127936 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b0e927-cb66-4694-890d-33b20573ccca-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.147645 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6rmw\" (UniqueName: \"kubernetes.io/projected/e1b0e927-cb66-4694-890d-33b20573ccca-kube-api-access-w6rmw\") pod \"cinder-scheduler-0\" (UID: \"e1b0e927-cb66-4694-890d-33b20573ccca\") " pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.156280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.200134 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:39:24 crc kubenswrapper[4805]: W0226 17:39:24.212437 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753f754f_839c_49a1_81e7_93d2c94a9cc7.slice/crio-8db088528661a72ea5ad391f4170ef0a666904b54672a331e8e6701e68f83b40 
WatchSource:0}: Error finding container 8db088528661a72ea5ad391f4170ef0a666904b54672a331e8e6701e68f83b40: Status 404 returned error can't find the container with id 8db088528661a72ea5ad391f4170ef0a666904b54672a331e8e6701e68f83b40 Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.216191 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-854cm"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.347954 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: W0226 17:39:24.378118 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3331db20_1348_4e54_81c3_c1e6e3cba7a8.slice/crio-8cd84c5309d95754f8652b259506ef479c31c5fa41697bb439be0d0c0216af8e WatchSource:0}: Error finding container 8cd84c5309d95754f8652b259506ef479c31c5fa41697bb439be0d0c0216af8e: Status 404 returned error can't find the container with id 8cd84c5309d95754f8652b259506ef479c31c5fa41697bb439be0d0c0216af8e Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.383798 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.488542 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"838d0018-e6a7-41c4-8c65-ccb9500d75c2","Type":"ContainerStarted","Data":"cb0754d23f667e5a58d65d6ec489404a74a268aeb57ad2393e06d26f0a3cd9a6"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.498743 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.520548 4805 generic.go:334] "Generic (PLEG): container finished" podID="137f038e-91ff-44e0-9f5c-616765295b23" containerID="2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454" exitCode=2 Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.520578 4805 generic.go:334] "Generic (PLEG): container finished" podID="137f038e-91ff-44e0-9f5c-616765295b23" containerID="42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112" exitCode=0 Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.520630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerDied","Data":"2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.520682 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerDied","Data":"42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.526164 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3331db20-1348-4e54-81c3-c1e6e3cba7a8","Type":"ContainerStarted","Data":"8cd84c5309d95754f8652b259506ef479c31c5fa41697bb439be0d0c0216af8e"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.544271 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.546129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-854cm" event={"ID":"753f754f-839c-49a1-81e7-93d2c94a9cc7","Type":"ContainerStarted","Data":"8db088528661a72ea5ad391f4170ef0a666904b54672a331e8e6701e68f83b40"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.555934 4805 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.557535 4805 generic.go:334] "Generic (PLEG): container finished" podID="db58eaee-5842-4d11-babf-1ededef9c68e" containerID="3501b0210f09be04b6c0abd0830b87016df664733368b5513faae8866af051f8" exitCode=0 Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.557685 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cf7d6fbd8-c2wwp" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.558648 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb2p9" event={"ID":"db58eaee-5842-4d11-babf-1ededef9c68e","Type":"ContainerDied","Data":"3501b0210f09be04b6c0abd0830b87016df664733368b5513faae8866af051f8"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.558712 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb2p9" event={"ID":"db58eaee-5842-4d11-babf-1ededef9c68e","Type":"ContainerStarted","Data":"d175c4171d248daa769119b5b9c20877806dbe3893c6e5296bcf47c2349b30ca"} Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.558808 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.581377 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.583763 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.586430 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.586641 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.590109 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.607117 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.719863 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-logs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720375 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720405 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-scripts\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-config-data\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k76x\" (UniqueName: \"kubernetes.io/projected/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-kube-api-access-4k76x\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720618 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.720697 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-config-data-custom\") pod \"cinder-api-0\" 
(UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.821908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.821958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-logs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822006 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822049 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822067 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-scripts\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822116 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-config-data\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822181 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k76x\" (UniqueName: \"kubernetes.io/projected/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-kube-api-access-4k76x\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.822203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.826658 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-logs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.828094 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: 
I0226 17:39:24.831948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.832626 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-scripts\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.834570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-config-data-custom\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.836757 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-config-data\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.841618 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-sj72q"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.852170 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.856582 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-sj72q"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.859557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k76x\" (UniqueName: \"kubernetes.io/projected/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-kube-api-access-4k76x\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.866608 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce5c6445-f359-4ebd-ab9c-d86269a25d2e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ce5c6445-f359-4ebd-ab9c-d86269a25d2e\") " pod="openstack/cinder-api-0" Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.871560 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cf7d6fbd8-c2wwp"] Feb 26 17:39:24 crc kubenswrapper[4805]: I0226 17:39:24.884563 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7cf7d6fbd8-c2wwp"] Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.004365 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fef9a78-09bd-4e03-b7be-0f92d3aa705b" path="/var/lib/kubelet/pods/1fef9a78-09bd-4e03-b7be-0f92d3aa705b/volumes" Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.007385 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372c1ddd-b0b5-409d-a09f-30bf1c17e5bc" path="/var/lib/kubelet/pods/372c1ddd-b0b5-409d-a09f-30bf1c17e5bc/volumes" Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.008116 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a310598-e458-4267-99fd-2a14ee356946" path="/var/lib/kubelet/pods/4a310598-e458-4267-99fd-2a14ee356946/volumes" Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.016146 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="712095a2-42ac-4ef6-a164-fbd4dd993948" path="/var/lib/kubelet/pods/712095a2-42ac-4ef6-a164-fbd4dd993948/volumes" Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.103891 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.207815 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 26 17:39:25 crc kubenswrapper[4805]: W0226 17:39:25.580476 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1b0e927_cb66_4694_890d_33b20573ccca.slice/crio-a5bb5de7431e708be75d6853d4d54841ae9ac95a302c5b314a3a23e851c171f4 WatchSource:0}: Error finding container a5bb5de7431e708be75d6853d4d54841ae9ac95a302c5b314a3a23e851c171f4: Status 404 returned error can't find the container with id a5bb5de7431e708be75d6853d4d54841ae9ac95a302c5b314a3a23e851c171f4 Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.583538 4805 generic.go:334] "Generic (PLEG): container finished" podID="137f038e-91ff-44e0-9f5c-616765295b23" containerID="174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6" exitCode=0 Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.583623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerDied","Data":"174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6"} Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.586945 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3331db20-1348-4e54-81c3-c1e6e3cba7a8","Type":"ContainerStarted","Data":"8b7e76772bd975f22137624e41cde6cbd9ed99258bf3212750f2cf44094476af"} Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.586993 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"3331db20-1348-4e54-81c3-c1e6e3cba7a8","Type":"ContainerStarted","Data":"7c86dfd6172868e92ffa026502806771bd2b5d1f800195d942a39e2c60be8ca6"} Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.587121 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.589550 4805 generic.go:334] "Generic (PLEG): container finished" podID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerID="bb7dac5fd64b17452adddfbc8f2b6be3f4c22874aa3e40d8559448eb2c23cf51" exitCode=0 Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.589584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-854cm" event={"ID":"753f754f-839c-49a1-81e7-93d2c94a9cc7","Type":"ContainerDied","Data":"bb7dac5fd64b17452adddfbc8f2b6be3f4c22874aa3e40d8559448eb2c23cf51"} Feb 26 17:39:25 crc kubenswrapper[4805]: I0226 17:39:25.658568 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.658539246 podStartE2EDuration="3.658539246s" podCreationTimestamp="2026-02-26 17:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:25.619404639 +0000 UTC m=+1480.181158998" watchObservedRunningTime="2026-02-26 17:39:25.658539246 +0000 UTC m=+1480.220293585" Feb 26 17:39:26 crc kubenswrapper[4805]: I0226 17:39:26.309365 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 26 17:39:26 crc kubenswrapper[4805]: I0226 17:39:26.337364 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:26 crc kubenswrapper[4805]: W0226 17:39:26.365215 4805 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce5c6445_f359_4ebd_ab9c_d86269a25d2e.slice/crio-3d37f26e509d95e7ef57d53e9cc4edf21411f57b613779684f5eb5b127003de4 WatchSource:0}: Error finding container 3d37f26e509d95e7ef57d53e9cc4edf21411f57b613779684f5eb5b127003de4: Status 404 returned error can't find the container with id 3d37f26e509d95e7ef57d53e9cc4edf21411f57b613779684f5eb5b127003de4 Feb 26 17:39:26 crc kubenswrapper[4805]: I0226 17:39:26.628788 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e1b0e927-cb66-4694-890d-33b20573ccca","Type":"ContainerStarted","Data":"a5bb5de7431e708be75d6853d4d54841ae9ac95a302c5b314a3a23e851c171f4"} Feb 26 17:39:26 crc kubenswrapper[4805]: I0226 17:39:26.646114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ce5c6445-f359-4ebd-ab9c-d86269a25d2e","Type":"ContainerStarted","Data":"3d37f26e509d95e7ef57d53e9cc4edf21411f57b613779684f5eb5b127003de4"} Feb 26 17:39:26 crc kubenswrapper[4805]: I0226 17:39:26.980433 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-847cbf4c89-k2szx" Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.493241 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-sj72q" podUID="4a310598-e458-4267-99fd-2a14ee356946" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: i/o timeout" Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.760760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-854cm" event={"ID":"753f754f-839c-49a1-81e7-93d2c94a9cc7","Type":"ContainerStarted","Data":"3bc403d4acf8101afb0c0a0b395e3755fcafc174537b0f18edf03e29f25feb9b"} Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.762152 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58bd69657f-854cm" 
Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.774836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ce5c6445-f359-4ebd-ab9c-d86269a25d2e","Type":"ContainerStarted","Data":"ddf129bce60fbcc90e220401662f220d7d6ff64fd3b3859d0f43556dc17e4ba2"} Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.780593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e1b0e927-cb66-4694-890d-33b20573ccca","Type":"ContainerStarted","Data":"8e9c2dd2593d4f17cbfcbce8e908b7348cd8acf21ce978b84b809f4c19c3c664"} Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.795583 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api-log" containerID="cri-o://7c86dfd6172868e92ffa026502806771bd2b5d1f800195d942a39e2c60be8ca6" gracePeriod=30 Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.796932 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"838d0018-e6a7-41c4-8c65-ccb9500d75c2","Type":"ContainerStarted","Data":"36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b"} Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.797089 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api" containerID="cri-o://8b7e76772bd975f22137624e41cde6cbd9ed99258bf3212750f2cf44094476af" gracePeriod=30 Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.807262 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58bd69657f-854cm" podStartSLOduration=5.807235262 podStartE2EDuration="5.807235262s" podCreationTimestamp="2026-02-26 17:39:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:27.794579572 +0000 UTC m=+1482.356333921" watchObservedRunningTime="2026-02-26 17:39:27.807235262 +0000 UTC m=+1482.368989601" Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.828394 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=4.282435493 podStartE2EDuration="5.828367655s" podCreationTimestamp="2026-02-26 17:39:22 +0000 UTC" firstStartedPulling="2026-02-26 17:39:24.199900068 +0000 UTC m=+1478.761654407" lastFinishedPulling="2026-02-26 17:39:25.74583222 +0000 UTC m=+1480.307586569" observedRunningTime="2026-02-26 17:39:27.823358158 +0000 UTC m=+1482.385112497" watchObservedRunningTime="2026-02-26 17:39:27.828367655 +0000 UTC m=+1482.390122004" Feb 26 17:39:27 crc kubenswrapper[4805]: I0226 17:39:27.873298 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.889400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ce5c6445-f359-4ebd-ab9c-d86269a25d2e","Type":"ContainerStarted","Data":"871ecdf6672d8bbb9c67e56815f9fd5f45deb0dd14cdac4e86a0423162111729"} Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.889951 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.913309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e1b0e927-cb66-4694-890d-33b20573ccca","Type":"ContainerStarted","Data":"f264d14db257d2e72111c78c98d85509813873184bebbaab495caf7d87c8ea2f"} Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.938440 4805 generic.go:334] "Generic (PLEG): container finished" podID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerID="8b7e76772bd975f22137624e41cde6cbd9ed99258bf3212750f2cf44094476af" exitCode=0 Feb 26 17:39:28 crc 
kubenswrapper[4805]: I0226 17:39:28.938480 4805 generic.go:334] "Generic (PLEG): container finished" podID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerID="7c86dfd6172868e92ffa026502806771bd2b5d1f800195d942a39e2c60be8ca6" exitCode=143 Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.938941 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3331db20-1348-4e54-81c3-c1e6e3cba7a8","Type":"ContainerDied","Data":"8b7e76772bd975f22137624e41cde6cbd9ed99258bf3212750f2cf44094476af"} Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.938996 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3331db20-1348-4e54-81c3-c1e6e3cba7a8","Type":"ContainerDied","Data":"7c86dfd6172868e92ffa026502806771bd2b5d1f800195d942a39e2c60be8ca6"} Feb 26 17:39:28 crc kubenswrapper[4805]: I0226 17:39:28.942706 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.942681142 podStartE2EDuration="4.942681142s" podCreationTimestamp="2026-02-26 17:39:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:28.925877028 +0000 UTC m=+1483.487631377" watchObservedRunningTime="2026-02-26 17:39:28.942681142 +0000 UTC m=+1483.504435481" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.060186 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.060160937 podStartE2EDuration="6.060160937s" podCreationTimestamp="2026-02-26 17:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:29.050515974 +0000 UTC m=+1483.612270313" watchObservedRunningTime="2026-02-26 17:39:29.060160937 +0000 UTC m=+1483.621915276" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 
17:39:29.153527 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327324 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-combined-ca-bundle\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327397 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327508 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-certs\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3331db20-1348-4e54-81c3-c1e6e3cba7a8-logs\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327641 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data-custom\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-mcvjk\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-kube-api-access-mcvjk\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.327751 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-scripts\") pod \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\" (UID: \"3331db20-1348-4e54-81c3-c1e6e3cba7a8\") " Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.338609 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-certs" (OuterVolumeSpecName: "certs") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.344326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3331db20-1348-4e54-81c3-c1e6e3cba7a8-logs" (OuterVolumeSpecName: "logs") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.344521 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-scripts" (OuterVolumeSpecName: "scripts") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.349208 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.369463 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-kube-api-access-mcvjk" (OuterVolumeSpecName: "kube-api-access-mcvjk") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). InnerVolumeSpecName "kube-api-access-mcvjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.385167 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.402441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.432720 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.432765 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.432777 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.432786 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3331db20-1348-4e54-81c3-c1e6e3cba7a8-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.432796 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.432807 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcvjk\" (UniqueName: \"kubernetes.io/projected/3331db20-1348-4e54-81c3-c1e6e3cba7a8-kube-api-access-mcvjk\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.484638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data" (OuterVolumeSpecName: "config-data") pod "3331db20-1348-4e54-81c3-c1e6e3cba7a8" (UID: "3331db20-1348-4e54-81c3-c1e6e3cba7a8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.534698 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3331db20-1348-4e54-81c3-c1e6e3cba7a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.980199 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"3331db20-1348-4e54-81c3-c1e6e3cba7a8","Type":"ContainerDied","Data":"8cd84c5309d95754f8652b259506ef479c31c5fa41697bb439be0d0c0216af8e"} Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.980249 4805 scope.go:117] "RemoveContainer" containerID="8b7e76772bd975f22137624e41cde6cbd9ed99258bf3212750f2cf44094476af" Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.980800 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="838d0018-e6a7-41c4-8c65-ccb9500d75c2" containerName="cloudkitty-proc" containerID="cri-o://36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b" gracePeriod=30 Feb 26 17:39:29 crc kubenswrapper[4805]: I0226 17:39:29.985570 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.051346 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.056611 4805 scope.go:117] "RemoveContainer" containerID="7c86dfd6172868e92ffa026502806771bd2b5d1f800195d942a39e2c60be8ca6" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.088161 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.125085 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:30 crc kubenswrapper[4805]: E0226 17:39:30.125551 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api-log" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.125564 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api-log" Feb 26 17:39:30 crc kubenswrapper[4805]: E0226 17:39:30.125582 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.125588 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.125753 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api-log" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.125792 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" containerName="cloudkitty-api" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.126923 4805 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.131334 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.131600 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.132221 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.138224 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.260855 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.261354 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.264673 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.265671 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.265903 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.266393 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69411227-14e0-40b4-a753-f2178bfbdd2a-logs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.266535 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.267376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-scripts\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.267465 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5rhh\" (UniqueName: 
\"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-kube-api-access-t5rhh\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.370957 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-scripts\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371110 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5rhh\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-kube-api-access-t5rhh\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371210 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371299 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " 
pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371343 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371378 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371434 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69411227-14e0-40b4-a753-f2178bfbdd2a-logs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.371513 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.373577 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69411227-14e0-40b4-a753-f2178bfbdd2a-logs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.378611 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.379508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.380216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.394734 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.395095 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-scripts\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.395526 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-certs\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 
17:39:30.403132 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.405859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5rhh\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-kube-api-access-t5rhh\") pod \"cloudkitty-api-0\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.452726 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.582804 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.584655 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.588504 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.589648 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.593762 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hhqd8" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.601163 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.678819 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3bfa8b25-9a4e-482c-b2f6-15347757aec2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.681448 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpdkz\" (UniqueName: \"kubernetes.io/projected/3bfa8b25-9a4e-482c-b2f6-15347757aec2-kube-api-access-tpdkz\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.681660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3bfa8b25-9a4e-482c-b2f6-15347757aec2-openstack-config\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.681798 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfa8b25-9a4e-482c-b2f6-15347757aec2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.785521 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpdkz\" (UniqueName: \"kubernetes.io/projected/3bfa8b25-9a4e-482c-b2f6-15347757aec2-kube-api-access-tpdkz\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.785634 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3bfa8b25-9a4e-482c-b2f6-15347757aec2-openstack-config\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.785718 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfa8b25-9a4e-482c-b2f6-15347757aec2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.785918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3bfa8b25-9a4e-482c-b2f6-15347757aec2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.796091 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3bfa8b25-9a4e-482c-b2f6-15347757aec2-openstack-config\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.797898 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3bfa8b25-9a4e-482c-b2f6-15347757aec2-openstack-config-secret\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.800477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfa8b25-9a4e-482c-b2f6-15347757aec2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.829396 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpdkz\" (UniqueName: \"kubernetes.io/projected/3bfa8b25-9a4e-482c-b2f6-15347757aec2-kube-api-access-tpdkz\") pod \"openstackclient\" (UID: \"3bfa8b25-9a4e-482c-b2f6-15347757aec2\") " pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.963285 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 26 17:39:30 crc kubenswrapper[4805]: I0226 17:39:30.979134 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3331db20-1348-4e54-81c3-c1e6e3cba7a8" path="/var/lib/kubelet/pods/3331db20-1348-4e54-81c3-c1e6e3cba7a8/volumes" Feb 26 17:39:31 crc kubenswrapper[4805]: W0226 17:39:31.214310 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69411227_14e0_40b4_a753_f2178bfbdd2a.slice/crio-0720c61c9fb74f97c3a65740cded3a4bd88f661c3803374031cbc06b78858a88 WatchSource:0}: Error finding container 0720c61c9fb74f97c3a65740cded3a4bd88f661c3803374031cbc06b78858a88: Status 404 returned error can't find the container with id 0720c61c9fb74f97c3a65740cded3a4bd88f661c3803374031cbc06b78858a88 Feb 26 17:39:31 crc kubenswrapper[4805]: I0226 17:39:31.228439 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:39:31 crc kubenswrapper[4805]: I0226 17:39:31.585967 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 26 17:39:31 crc kubenswrapper[4805]: W0226 17:39:31.601233 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bfa8b25_9a4e_482c_b2f6_15347757aec2.slice/crio-22bd93d916693fd342ebb1e9e80dfba60aabf901fb6c931d143ad324b573971b WatchSource:0}: Error finding container 22bd93d916693fd342ebb1e9e80dfba60aabf901fb6c931d143ad324b573971b: Status 404 returned error can't find the container with id 22bd93d916693fd342ebb1e9e80dfba60aabf901fb6c931d143ad324b573971b Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.143340 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.157313 4805 generic.go:334] "Generic (PLEG): container finished" podID="137f038e-91ff-44e0-9f5c-616765295b23" containerID="8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2" exitCode=0 Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.157386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerDied","Data":"8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2"} Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.157414 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"137f038e-91ff-44e0-9f5c-616765295b23","Type":"ContainerDied","Data":"712e1d0d98e73b81e9376915ef3fa85b0c1af98dd1ec8c61f644473c5d562022"} Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.157432 4805 scope.go:117] "RemoveContainer" containerID="174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.178008 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3bfa8b25-9a4e-482c-b2f6-15347757aec2","Type":"ContainerStarted","Data":"22bd93d916693fd342ebb1e9e80dfba60aabf901fb6c931d143ad324b573971b"} Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.241499 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-combined-ca-bundle\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.241816 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgp7k\" (UniqueName: 
\"kubernetes.io/projected/137f038e-91ff-44e0-9f5c-616765295b23-kube-api-access-rgp7k\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.241943 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-scripts\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.242056 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-config-data\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.242106 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-run-httpd\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.242145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-sg-core-conf-yaml\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.242280 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-log-httpd\") pod \"137f038e-91ff-44e0-9f5c-616765295b23\" (UID: \"137f038e-91ff-44e0-9f5c-616765295b23\") " Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.250505 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.253581 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.346335 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.346363 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/137f038e-91ff-44e0-9f5c-616765295b23-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.376428 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137f038e-91ff-44e0-9f5c-616765295b23-kube-api-access-rgp7k" (OuterVolumeSpecName: "kube-api-access-rgp7k") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "kube-api-access-rgp7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.377315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-scripts" (OuterVolumeSpecName: "scripts") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.449126 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgp7k\" (UniqueName: \"kubernetes.io/projected/137f038e-91ff-44e0-9f5c-616765295b23-kube-api-access-rgp7k\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.453776 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.449319 4805 scope.go:117] "RemoveContainer" containerID="2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.454320 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"69411227-14e0-40b4-a753-f2178bfbdd2a","Type":"ContainerStarted","Data":"f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332"} Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.454373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"69411227-14e0-40b4-a753-f2178bfbdd2a","Type":"ContainerStarted","Data":"0720c61c9fb74f97c3a65740cded3a4bd88f661c3803374031cbc06b78858a88"} Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.477260 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.555542 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.678267 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-config-data" (OuterVolumeSpecName: "config-data") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.686141 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "137f038e-91ff-44e0-9f5c-616765295b23" (UID: "137f038e-91ff-44e0-9f5c-616765295b23"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.712199 4805 scope.go:117] "RemoveContainer" containerID="8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.760416 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.760988 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137f038e-91ff-44e0-9f5c-616765295b23-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.787168 4805 scope.go:117] "RemoveContainer" containerID="42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.853481 4805 scope.go:117] "RemoveContainer" containerID="174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6" Feb 26 17:39:32 crc kubenswrapper[4805]: E0226 17:39:32.854250 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6\": container with ID starting with 174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6 not found: ID does not exist" containerID="174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.854306 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6"} err="failed to get container status \"174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6\": rpc error: code = NotFound desc = could not find container 
\"174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6\": container with ID starting with 174d7c99344723f4aa321d5fdb474ffc6bd1ccd7b7888bc682ee05cbabc29ff6 not found: ID does not exist" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.854340 4805 scope.go:117] "RemoveContainer" containerID="2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454" Feb 26 17:39:32 crc kubenswrapper[4805]: E0226 17:39:32.855089 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454\": container with ID starting with 2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454 not found: ID does not exist" containerID="2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.855122 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454"} err="failed to get container status \"2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454\": rpc error: code = NotFound desc = could not find container \"2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454\": container with ID starting with 2e1d8c580374d2d29492306f907602a4e7ea110578934b831a747c61073f6454 not found: ID does not exist" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.855144 4805 scope.go:117] "RemoveContainer" containerID="8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2" Feb 26 17:39:32 crc kubenswrapper[4805]: E0226 17:39:32.870216 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2\": container with ID starting with 8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2 not found: ID does not exist" 
containerID="8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.870275 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2"} err="failed to get container status \"8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2\": rpc error: code = NotFound desc = could not find container \"8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2\": container with ID starting with 8c104ad65c86546f104f419d7b22b73d49404959477ca78dd75b9e4053284bd2 not found: ID does not exist" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.870315 4805 scope.go:117] "RemoveContainer" containerID="42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112" Feb 26 17:39:32 crc kubenswrapper[4805]: E0226 17:39:32.872510 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112\": container with ID starting with 42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112 not found: ID does not exist" containerID="42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.872544 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112"} err="failed to get container status \"42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112\": rpc error: code = NotFound desc = could not find container \"42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112\": container with ID starting with 42434e486b56a20091e18aef8acebafe32d47be4e8d9bbd951451b88a005f112 not found: ID does not exist" Feb 26 17:39:32 crc kubenswrapper[4805]: I0226 17:39:32.977463 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.301264 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xhgvb"] Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.301566 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerName="dnsmasq-dns" containerID="cri-o://cdc814a1c48991a2eafc0aa3500a5a27c075d9ae5389051b5809c5e2b9df5fb4" gracePeriod=10 Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.651159 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"69411227-14e0-40b4-a753-f2178bfbdd2a","Type":"ContainerStarted","Data":"4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05"} Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.652220 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.676513 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.740772 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.74075329 podStartE2EDuration="3.74075329s" podCreationTimestamp="2026-02-26 17:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:33.69278574 +0000 UTC m=+1488.254540099" watchObservedRunningTime="2026-02-26 17:39:33.74075329 +0000 UTC m=+1488.302507629" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.788233 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.815463 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.833786 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:33 crc kubenswrapper[4805]: E0226 17:39:33.834307 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-notification-agent" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834330 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-notification-agent" Feb 26 17:39:33 crc kubenswrapper[4805]: E0226 17:39:33.834347 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-central-agent" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834354 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-central-agent" Feb 26 17:39:33 crc kubenswrapper[4805]: E0226 17:39:33.834369 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="sg-core" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834376 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="sg-core" Feb 26 17:39:33 crc kubenswrapper[4805]: E0226 17:39:33.834400 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="proxy-httpd" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834405 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="proxy-httpd" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834610 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="proxy-httpd" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834629 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-notification-agent" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834646 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="ceilometer-central-agent" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.834664 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="137f038e-91ff-44e0-9f5c-616765295b23" containerName="sg-core" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.845402 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.852906 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.855618 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.855853 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903080 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-run-httpd\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903144 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jksnr\" (UniqueName: \"kubernetes.io/projected/0549f363-78ec-4961-a646-a27f5d96d274-kube-api-access-jksnr\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-config-data\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903206 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903288 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-scripts\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:33 crc kubenswrapper[4805]: I0226 17:39:33.903554 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-log-httpd\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.005975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.006243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-scripts\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.006320 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-log-httpd\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.006481 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-run-httpd\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.006621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jksnr\" (UniqueName: \"kubernetes.io/projected/0549f363-78ec-4961-a646-a27f5d96d274-kube-api-access-jksnr\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.006733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-config-data\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.006880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.008783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-run-httpd\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " 
pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.012548 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-log-httpd\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.018739 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.024336 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-config-data\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.024900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.028230 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-scripts\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.031408 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jksnr\" (UniqueName: 
\"kubernetes.io/projected/0549f363-78ec-4961-a646-a27f5d96d274-kube-api-access-jksnr\") pod \"ceilometer-0\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.227392 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.298326 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cb9cbd877-vr7d7" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.319846 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.320868 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59c7c6ffc6-vlxnj" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.388632 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="e1b0e927-cb66-4694-890d-33b20573ccca" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.196:8080/\": dial tcp 10.217.0.196:8080: connect: connection refused" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.404004 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69fdbbb8fb-b5bss"] Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.404255 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69fdbbb8fb-b5bss" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-api" containerID="cri-o://a11287f85caeabc215617eb14feed8be325853f4380b7830acfac44c9b585f31" gracePeriod=30 Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.404679 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-69fdbbb8fb-b5bss" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" 
containerName="neutron-httpd" containerID="cri-o://5d81611b984476f33a2b70506403b0d972de66d679bddc922ae4ff038638df3e" gracePeriod=30 Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.729501 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66c9c57f69-5r7hv_04a821a2-53df-4081-a120-61b7b90b3120/neutron-api/0.log" Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.729907 4805 generic.go:334] "Generic (PLEG): container finished" podID="04a821a2-53df-4081-a120-61b7b90b3120" containerID="ba7f10b014dccb0e6ad27fd1d69ec2a4e15adba750fa28c92b9528f3da0a5662" exitCode=137 Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.730037 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66c9c57f69-5r7hv" event={"ID":"04a821a2-53df-4081-a120-61b7b90b3120","Type":"ContainerDied","Data":"ba7f10b014dccb0e6ad27fd1d69ec2a4e15adba750fa28c92b9528f3da0a5662"} Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.760557 4805 generic.go:334] "Generic (PLEG): container finished" podID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerID="cdc814a1c48991a2eafc0aa3500a5a27c075d9ae5389051b5809c5e2b9df5fb4" exitCode=0 Feb 26 17:39:34 crc kubenswrapper[4805]: I0226 17:39:34.760831 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" event={"ID":"1db182f9-cd35-48d6-90d0-270eb8a25e4c","Type":"ContainerDied","Data":"cdc814a1c48991a2eafc0aa3500a5a27c075d9ae5389051b5809c5e2b9df5fb4"} Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.103471 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137f038e-91ff-44e0-9f5c-616765295b23" path="/var/lib/kubelet/pods/137f038e-91ff-44e0-9f5c-616765295b23/volumes" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.381313 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.381730 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.498254 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-sb\") pod \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.498345 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-swift-storage-0\") pod \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.498391 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-svc\") pod \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.498419 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4pzt\" (UniqueName: \"kubernetes.io/projected/1db182f9-cd35-48d6-90d0-270eb8a25e4c-kube-api-access-d4pzt\") pod \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.498642 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-nb\") pod \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " 
Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.498731 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-config\") pod \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\" (UID: \"1db182f9-cd35-48d6-90d0-270eb8a25e4c\") " Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.538004 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db182f9-cd35-48d6-90d0-270eb8a25e4c-kube-api-access-d4pzt" (OuterVolumeSpecName: "kube-api-access-d4pzt") pod "1db182f9-cd35-48d6-90d0-270eb8a25e4c" (UID: "1db182f9-cd35-48d6-90d0-270eb8a25e4c"). InnerVolumeSpecName "kube-api-access-d4pzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.607585 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4pzt\" (UniqueName: \"kubernetes.io/projected/1db182f9-cd35-48d6-90d0-270eb8a25e4c-kube-api-access-d4pzt\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.622918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1db182f9-cd35-48d6-90d0-270eb8a25e4c" (UID: "1db182f9-cd35-48d6-90d0-270eb8a25e4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.689658 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-config" (OuterVolumeSpecName: "config") pod "1db182f9-cd35-48d6-90d0-270eb8a25e4c" (UID: "1db182f9-cd35-48d6-90d0-270eb8a25e4c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.711488 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.711543 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.743431 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1db182f9-cd35-48d6-90d0-270eb8a25e4c" (UID: "1db182f9-cd35-48d6-90d0-270eb8a25e4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.789067 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1db182f9-cd35-48d6-90d0-270eb8a25e4c" (UID: "1db182f9-cd35-48d6-90d0-270eb8a25e4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.808247 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1db182f9-cd35-48d6-90d0-270eb8a25e4c" (UID: "1db182f9-cd35-48d6-90d0-270eb8a25e4c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.816423 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.816462 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.816475 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db182f9-cd35-48d6-90d0-270eb8a25e4c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.900368 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" event={"ID":"1db182f9-cd35-48d6-90d0-270eb8a25e4c","Type":"ContainerDied","Data":"afb3fa2aab56d9c2f02851e6323a6cdc683d3698149677e07a089d4c0eac5b32"} Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.900428 4805 scope.go:117] "RemoveContainer" containerID="cdc814a1c48991a2eafc0aa3500a5a27c075d9ae5389051b5809c5e2b9df5fb4" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.900574 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-xhgvb" Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.937489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerStarted","Data":"3e223a849dfa02fc3ab2de42e2595a226f913343a163b69247b1c3fe274c880c"} Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.948792 4805 generic.go:334] "Generic (PLEG): container finished" podID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerID="5d81611b984476f33a2b70506403b0d972de66d679bddc922ae4ff038638df3e" exitCode=0 Feb 26 17:39:35 crc kubenswrapper[4805]: I0226 17:39:35.948840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fdbbb8fb-b5bss" event={"ID":"ffe736fb-9cf2-4686-ac9f-d9da17f1e567","Type":"ContainerDied","Data":"5d81611b984476f33a2b70506403b0d972de66d679bddc922ae4ff038638df3e"} Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.005541 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66c9c57f69-5r7hv_04a821a2-53df-4081-a120-61b7b90b3120/neutron-api/0.log" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.005646 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.005933 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xhgvb"] Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.034369 4805 scope.go:117] "RemoveContainer" containerID="ef47d40952554ed87d27e4aefdbac906c98429d0d38c98591f445d40b6182668" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.084673 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-xhgvb"] Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.142965 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-httpd-config\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.143050 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-public-tls-certs\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.143139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-combined-ca-bundle\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.143170 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-internal-tls-certs\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 
17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.143219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrrbd\" (UniqueName: \"kubernetes.io/projected/04a821a2-53df-4081-a120-61b7b90b3120-kube-api-access-wrrbd\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.143298 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-ovndb-tls-certs\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.143356 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-config\") pod \"04a821a2-53df-4081-a120-61b7b90b3120\" (UID: \"04a821a2-53df-4081-a120-61b7b90b3120\") " Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.161825 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a821a2-53df-4081-a120-61b7b90b3120-kube-api-access-wrrbd" (OuterVolumeSpecName: "kube-api-access-wrrbd") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "kube-api-access-wrrbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.175375 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.246304 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.246335 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrrbd\" (UniqueName: \"kubernetes.io/projected/04a821a2-53df-4081-a120-61b7b90b3120-kube-api-access-wrrbd\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.306260 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.338087 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.342908 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.348758 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.348791 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.348800 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.440642 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.451239 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.462484 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-config" (OuterVolumeSpecName: "config") pod "04a821a2-53df-4081-a120-61b7b90b3120" (UID: "04a821a2-53df-4081-a120-61b7b90b3120"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.552944 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/04a821a2-53df-4081-a120-61b7b90b3120-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.969656 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-66c9c57f69-5r7hv_04a821a2-53df-4081-a120-61b7b90b3120/neutron-api/0.log" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.969855 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66c9c57f69-5r7hv" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.971715 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" path="/var/lib/kubelet/pods/1db182f9-cd35-48d6-90d0-270eb8a25e4c/volumes" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.972594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66c9c57f69-5r7hv" event={"ID":"04a821a2-53df-4081-a120-61b7b90b3120","Type":"ContainerDied","Data":"01281376c435352a21faf37222dc57188d7413dab2b2fd50066d6f2d6ee970b2"} Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.972633 4805 scope.go:117] "RemoveContainer" containerID="1016df8f9d8a15b6c98fe96ede4c5294b7dc0bac93ab731f36779f30155311b4" Feb 26 17:39:36 crc kubenswrapper[4805]: I0226 17:39:36.992694 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerStarted","Data":"d8ace3db2e32c46a0ce235ea10a44df24146d7d66e6e5eb5fd930e21e9bb4b8e"} Feb 26 17:39:37 crc kubenswrapper[4805]: I0226 17:39:37.057820 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66c9c57f69-5r7hv"] Feb 26 17:39:37 crc kubenswrapper[4805]: I0226 17:39:37.060614 4805 scope.go:117] 
"RemoveContainer" containerID="ba7f10b014dccb0e6ad27fd1d69ec2a4e15adba750fa28c92b9528f3da0a5662" Feb 26 17:39:37 crc kubenswrapper[4805]: I0226 17:39:37.069607 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66c9c57f69-5r7hv"] Feb 26 17:39:38 crc kubenswrapper[4805]: I0226 17:39:38.020113 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerStarted","Data":"99dbac660ab581ebe7b60def48e24e85a96c5d34193dc74a562e2a1a27fb7832"} Feb 26 17:39:38 crc kubenswrapper[4805]: I0226 17:39:38.920542 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:39:38 crc kubenswrapper[4805]: I0226 17:39:38.985210 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a821a2-53df-4081-a120-61b7b90b3120" path="/var/lib/kubelet/pods/04a821a2-53df-4081-a120-61b7b90b3120/volumes" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.027670 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data-custom\") pod \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.027723 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-combined-ca-bundle\") pod \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.027762 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-scripts\") pod \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\" (UID: 
\"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.027857 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92prt\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-kube-api-access-92prt\") pod \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.028046 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-certs\") pod \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.028075 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data\") pod \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\" (UID: \"838d0018-e6a7-41c4-8c65-ccb9500d75c2\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.039979 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-kube-api-access-92prt" (OuterVolumeSpecName: "kube-api-access-92prt") pod "838d0018-e6a7-41c4-8c65-ccb9500d75c2" (UID: "838d0018-e6a7-41c4-8c65-ccb9500d75c2"). InnerVolumeSpecName "kube-api-access-92prt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.041196 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-scripts" (OuterVolumeSpecName: "scripts") pod "838d0018-e6a7-41c4-8c65-ccb9500d75c2" (UID: "838d0018-e6a7-41c4-8c65-ccb9500d75c2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.056967 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-certs" (OuterVolumeSpecName: "certs") pod "838d0018-e6a7-41c4-8c65-ccb9500d75c2" (UID: "838d0018-e6a7-41c4-8c65-ccb9500d75c2"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.057078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "838d0018-e6a7-41c4-8c65-ccb9500d75c2" (UID: "838d0018-e6a7-41c4-8c65-ccb9500d75c2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.080780 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data" (OuterVolumeSpecName: "config-data") pod "838d0018-e6a7-41c4-8c65-ccb9500d75c2" (UID: "838d0018-e6a7-41c4-8c65-ccb9500d75c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.080864 4805 generic.go:334] "Generic (PLEG): container finished" podID="838d0018-e6a7-41c4-8c65-ccb9500d75c2" containerID="36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b" exitCode=0 Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.080902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"838d0018-e6a7-41c4-8c65-ccb9500d75c2","Type":"ContainerDied","Data":"36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b"} Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.081628 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"838d0018-e6a7-41c4-8c65-ccb9500d75c2","Type":"ContainerDied","Data":"cb0754d23f667e5a58d65d6ec489404a74a268aeb57ad2393e06d26f0a3cd9a6"} Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.082139 4805 scope.go:117] "RemoveContainer" containerID="36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.080981 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.105714 4805 generic.go:334] "Generic (PLEG): container finished" podID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerID="a11287f85caeabc215617eb14feed8be325853f4380b7830acfac44c9b585f31" exitCode=0 Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.105818 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fdbbb8fb-b5bss" event={"ID":"ffe736fb-9cf2-4686-ac9f-d9da17f1e567","Type":"ContainerDied","Data":"a11287f85caeabc215617eb14feed8be325853f4380b7830acfac44c9b585f31"} Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.115433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerStarted","Data":"5a5032f76c98c3e4ee19606b038ab920869297fa49d8488fc0e15655381f6fca"} Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.121321 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="ce5c6445-f359-4ebd-ab9c-d86269a25d2e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.197:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.122925 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "838d0018-e6a7-41c4-8c65-ccb9500d75c2" (UID: "838d0018-e6a7-41c4-8c65-ccb9500d75c2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.130251 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.130284 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.130295 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.130306 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.130316 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/838d0018-e6a7-41c4-8c65-ccb9500d75c2-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.130326 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92prt\" (UniqueName: \"kubernetes.io/projected/838d0018-e6a7-41c4-8c65-ccb9500d75c2-kube-api-access-92prt\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.255265 4805 scope.go:117] "RemoveContainer" containerID="36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.255589 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b\": container with ID starting with 36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b not found: ID does not exist" containerID="36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.255612 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b"} err="failed to get container status \"36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b\": rpc error: code = NotFound desc = could not find container \"36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b\": container with ID starting with 36c7d32e62c9a4d51d1d08a5094f8fbe94aa47214d523025b582533924318e1b not found: ID does not exist" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.378631 4805 scope.go:117] "RemoveContainer" containerID="e07412a516f5409469792dc0a2b18405f86d480df730a3e67af22989803299b8" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.615185 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.625537 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.650092 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663201 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663726 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-httpd" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663743 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-httpd" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663758 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-api" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663764 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-api" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663779 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerName="dnsmasq-dns" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663785 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerName="dnsmasq-dns" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663798 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerName="init" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663804 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerName="init" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663814 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663820 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663835 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-api" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663841 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-api" Feb 26 17:39:39 crc kubenswrapper[4805]: E0226 17:39:39.663852 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838d0018-e6a7-41c4-8c65-ccb9500d75c2" containerName="cloudkitty-proc" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.663857 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="838d0018-e6a7-41c4-8c65-ccb9500d75c2" containerName="cloudkitty-proc" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664050 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-api" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664068 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664074 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="838d0018-e6a7-41c4-8c65-ccb9500d75c2" containerName="cloudkitty-proc" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664082 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-httpd" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664093 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1db182f9-cd35-48d6-90d0-270eb8a25e4c" containerName="dnsmasq-dns" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664101 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" containerName="neutron-api" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.664888 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.682620 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.687215 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.745523 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-httpd-config\") pod \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-combined-ca-bundle\") pod \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746303 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-ovndb-tls-certs\") pod \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746403 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-snp6r\" (UniqueName: \"kubernetes.io/projected/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-kube-api-access-snp6r\") pod \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746457 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-config\") pod \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\" (UID: \"ffe736fb-9cf2-4686-ac9f-d9da17f1e567\") " Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhhf\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-kube-api-access-zxhhf\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746891 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-certs\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.746934 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.747031 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-scripts\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.747052 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.753252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ffe736fb-9cf2-4686-ac9f-d9da17f1e567" (UID: "ffe736fb-9cf2-4686-ac9f-d9da17f1e567"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.776552 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-kube-api-access-snp6r" (OuterVolumeSpecName: "kube-api-access-snp6r") pod "ffe736fb-9cf2-4686-ac9f-d9da17f1e567" (UID: "ffe736fb-9cf2-4686-ac9f-d9da17f1e567"). InnerVolumeSpecName "kube-api-access-snp6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.820008 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-config" (OuterVolumeSpecName: "config") pod "ffe736fb-9cf2-4686-ac9f-d9da17f1e567" (UID: "ffe736fb-9cf2-4686-ac9f-d9da17f1e567"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848284 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhhf\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-kube-api-access-zxhhf\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848686 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-certs\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848738 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848807 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-scripts\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848826 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848888 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.849006 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snp6r\" (UniqueName: \"kubernetes.io/projected/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-kube-api-access-snp6r\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.849036 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.849047 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.848580 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ffe736fb-9cf2-4686-ac9f-d9da17f1e567" (UID: "ffe736fb-9cf2-4686-ac9f-d9da17f1e567"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.856050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.856084 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.858492 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-certs\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.859043 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.864761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-scripts\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.869702 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhhf\" (UniqueName: 
\"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-kube-api-access-zxhhf\") pod \"cloudkitty-proc-0\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.883279 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffe736fb-9cf2-4686-ac9f-d9da17f1e567" (UID: "ffe736fb-9cf2-4686-ac9f-d9da17f1e567"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.951184 4805 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.951213 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe736fb-9cf2-4686-ac9f-d9da17f1e567-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:39 crc kubenswrapper[4805]: I0226 17:39:39.963136 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.029296 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.112671 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="ce5c6445-f359-4ebd-ab9c-d86269a25d2e" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.197:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.160624 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-69fdbbb8fb-b5bss" event={"ID":"ffe736fb-9cf2-4686-ac9f-d9da17f1e567","Type":"ContainerDied","Data":"6754a832a94c9b06a5ad9201ab2d7d563400c1c704b9288a2c6345f668e29490"} Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.160687 4805 scope.go:117] "RemoveContainer" containerID="5d81611b984476f33a2b70506403b0d972de66d679bddc922ae4ff038638df3e" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.160755 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-69fdbbb8fb-b5bss" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.297397 4805 scope.go:117] "RemoveContainer" containerID="a11287f85caeabc215617eb14feed8be325853f4380b7830acfac44c9b585f31" Feb 26 17:39:40 crc kubenswrapper[4805]: E0226 17:39:40.384652 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe736fb_9cf2_4686_ac9f_d9da17f1e567.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.405241 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-69fdbbb8fb-b5bss"] Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.466071 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-69fdbbb8fb-b5bss"] Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.697765 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.964979 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838d0018-e6a7-41c4-8c65-ccb9500d75c2" path="/var/lib/kubelet/pods/838d0018-e6a7-41c4-8c65-ccb9500d75c2/volumes" Feb 26 17:39:40 crc kubenswrapper[4805]: I0226 17:39:40.965655 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe736fb-9cf2-4686-ac9f-d9da17f1e567" path="/var/lib/kubelet/pods/ffe736fb-9cf2-4686-ac9f-d9da17f1e567/volumes" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.008767 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7d9687b795-zqn9b"] Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.010582 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.012372 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.012540 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.013976 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.036577 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d9687b795-zqn9b"] Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.094862 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-public-tls-certs\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.094931 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b8f655-2944-4562-89c4-d2bcf9516cde-log-httpd\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.094971 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-internal-tls-certs\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: 
I0226 17:39:41.095007 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b8f655-2944-4562-89c4-d2bcf9516cde-run-httpd\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.095092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-combined-ca-bundle\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.095133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-config-data\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.095157 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssqr\" (UniqueName: \"kubernetes.io/projected/01b8f655-2944-4562-89c4-d2bcf9516cde-kube-api-access-jssqr\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.095202 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01b8f655-2944-4562-89c4-d2bcf9516cde-etc-swift\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 
17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196328 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-public-tls-certs\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196383 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b8f655-2944-4562-89c4-d2bcf9516cde-log-httpd\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-internal-tls-certs\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196478 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b8f655-2944-4562-89c4-d2bcf9516cde-run-httpd\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196546 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-combined-ca-bundle\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196585 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-config-data\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196610 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssqr\" (UniqueName: \"kubernetes.io/projected/01b8f655-2944-4562-89c4-d2bcf9516cde-kube-api-access-jssqr\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.196659 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01b8f655-2944-4562-89c4-d2bcf9516cde-etc-swift\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.201567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b8f655-2944-4562-89c4-d2bcf9516cde-log-httpd\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.201728 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01b8f655-2944-4562-89c4-d2bcf9516cde-run-httpd\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.205973 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-public-tls-certs\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.206052 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-internal-tls-certs\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.207383 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-combined-ca-bundle\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.207843 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b8f655-2944-4562-89c4-d2bcf9516cde-config-data\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.213254 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72a9ed44-dc10-4f81-be61-b6ba20c83548","Type":"ContainerStarted","Data":"198b395ccd7c0775cca85f690aa9da5aed7d703bf945bda979ab5a0685c36e39"} Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.213306 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72a9ed44-dc10-4f81-be61-b6ba20c83548","Type":"ContainerStarted","Data":"be23123dce29ad40004c65d95f88da324c0315aac4f3769b4844f9b274a23bd9"} Feb 26 17:39:41 crc 
kubenswrapper[4805]: I0226 17:39:41.223162 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01b8f655-2944-4562-89c4-d2bcf9516cde-etc-swift\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.224959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jssqr\" (UniqueName: \"kubernetes.io/projected/01b8f655-2944-4562-89c4-d2bcf9516cde-kube-api-access-jssqr\") pod \"swift-proxy-7d9687b795-zqn9b\" (UID: \"01b8f655-2944-4562-89c4-d2bcf9516cde\") " pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.230064 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerStarted","Data":"dd3fb7cf810fbce09aeedbf46f16216a4b3258c03dd067dbedb4d31d19f17a07"} Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.230337 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.233727 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.233707041 podStartE2EDuration="2.233707041s" podCreationTimestamp="2026-02-26 17:39:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:41.229363961 +0000 UTC m=+1495.791118310" watchObservedRunningTime="2026-02-26 17:39:41.233707041 +0000 UTC m=+1495.795461380" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.285365 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.261475776 podStartE2EDuration="8.285341534s" 
podCreationTimestamp="2026-02-26 17:39:33 +0000 UTC" firstStartedPulling="2026-02-26 17:39:35.441252673 +0000 UTC m=+1490.003007022" lastFinishedPulling="2026-02-26 17:39:40.465118441 +0000 UTC m=+1495.026872780" observedRunningTime="2026-02-26 17:39:41.254516566 +0000 UTC m=+1495.816270915" watchObservedRunningTime="2026-02-26 17:39:41.285341534 +0000 UTC m=+1495.847095873" Feb 26 17:39:41 crc kubenswrapper[4805]: I0226 17:39:41.328781 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:42 crc kubenswrapper[4805]: I0226 17:39:42.058383 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d9687b795-zqn9b"] Feb 26 17:39:42 crc kubenswrapper[4805]: W0226 17:39:42.058996 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b8f655_2944_4562_89c4_d2bcf9516cde.slice/crio-6026994ff53e4a03d5e0053e45341252960f62519a2989421a67583dfb9c5b40 WatchSource:0}: Error finding container 6026994ff53e4a03d5e0053e45341252960f62519a2989421a67583dfb9c5b40: Status 404 returned error can't find the container with id 6026994ff53e4a03d5e0053e45341252960f62519a2989421a67583dfb9c5b40 Feb 26 17:39:42 crc kubenswrapper[4805]: I0226 17:39:42.244770 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d9687b795-zqn9b" event={"ID":"01b8f655-2944-4562-89c4-d2bcf9516cde","Type":"ContainerStarted","Data":"6026994ff53e4a03d5e0053e45341252960f62519a2989421a67583dfb9c5b40"} Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.266579 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d9687b795-zqn9b" event={"ID":"01b8f655-2944-4562-89c4-d2bcf9516cde","Type":"ContainerStarted","Data":"0c2a0fb2890549e56fad73842e3b6b3550b36a560114928f88d3f31cf11c390d"} Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.266949 4805 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-proxy-7d9687b795-zqn9b" event={"ID":"01b8f655-2944-4562-89c4-d2bcf9516cde","Type":"ContainerStarted","Data":"deda96eb7d330dbc1028a25afbb024543958d1cf8bc8e0b72ad463c19dc44cd0"} Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.268150 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.268219 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.298989 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7d9687b795-zqn9b" podStartSLOduration=3.298965921 podStartE2EDuration="3.298965921s" podCreationTimestamp="2026-02-26 17:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:39:43.291176004 +0000 UTC m=+1497.852930343" watchObservedRunningTime="2026-02-26 17:39:43.298965921 +0000 UTC m=+1497.860720260" Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.982731 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.983951 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="proxy-httpd" containerID="cri-o://dd3fb7cf810fbce09aeedbf46f16216a4b3258c03dd067dbedb4d31d19f17a07" gracePeriod=30 Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.984285 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="sg-core" containerID="cri-o://5a5032f76c98c3e4ee19606b038ab920869297fa49d8488fc0e15655381f6fca" gracePeriod=30 Feb 26 17:39:43 crc 
kubenswrapper[4805]: I0226 17:39:43.984382 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-notification-agent" containerID="cri-o://99dbac660ab581ebe7b60def48e24e85a96c5d34193dc74a562e2a1a27fb7832" gracePeriod=30 Feb 26 17:39:43 crc kubenswrapper[4805]: I0226 17:39:43.987540 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-central-agent" containerID="cri-o://d8ace3db2e32c46a0ce235ea10a44df24146d7d66e6e5eb5fd930e21e9bb4b8e" gracePeriod=30 Feb 26 17:39:44 crc kubenswrapper[4805]: I0226 17:39:44.219733 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307139 4805 generic.go:334] "Generic (PLEG): container finished" podID="0549f363-78ec-4961-a646-a27f5d96d274" containerID="dd3fb7cf810fbce09aeedbf46f16216a4b3258c03dd067dbedb4d31d19f17a07" exitCode=0 Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307771 4805 generic.go:334] "Generic (PLEG): container finished" podID="0549f363-78ec-4961-a646-a27f5d96d274" containerID="5a5032f76c98c3e4ee19606b038ab920869297fa49d8488fc0e15655381f6fca" exitCode=2 Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307788 4805 generic.go:334] "Generic (PLEG): container finished" podID="0549f363-78ec-4961-a646-a27f5d96d274" containerID="99dbac660ab581ebe7b60def48e24e85a96c5d34193dc74a562e2a1a27fb7832" exitCode=0 Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307797 4805 generic.go:334] "Generic (PLEG): container finished" podID="0549f363-78ec-4961-a646-a27f5d96d274" containerID="d8ace3db2e32c46a0ce235ea10a44df24146d7d66e6e5eb5fd930e21e9bb4b8e" exitCode=0 Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307208 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerDied","Data":"dd3fb7cf810fbce09aeedbf46f16216a4b3258c03dd067dbedb4d31d19f17a07"} Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307832 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerDied","Data":"5a5032f76c98c3e4ee19606b038ab920869297fa49d8488fc0e15655381f6fca"} Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307850 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerDied","Data":"99dbac660ab581ebe7b60def48e24e85a96c5d34193dc74a562e2a1a27fb7832"} Feb 26 17:39:45 crc kubenswrapper[4805]: I0226 17:39:45.307865 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerDied","Data":"d8ace3db2e32c46a0ce235ea10a44df24146d7d66e6e5eb5fd930e21e9bb4b8e"} Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.605475 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.663821 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-combined-ca-bundle\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664025 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-run-httpd\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664185 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-config-data\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jksnr\" (UniqueName: \"kubernetes.io/projected/0549f363-78ec-4961-a646-a27f5d96d274-kube-api-access-jksnr\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664243 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-scripts\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664294 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-log-httpd\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664369 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-sg-core-conf-yaml\") pod \"0549f363-78ec-4961-a646-a27f5d96d274\" (UID: \"0549f363-78ec-4961-a646-a27f5d96d274\") " Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.664902 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.665343 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.679160 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-scripts" (OuterVolumeSpecName: "scripts") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.693996 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0549f363-78ec-4961-a646-a27f5d96d274-kube-api-access-jksnr" (OuterVolumeSpecName: "kube-api-access-jksnr") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "kube-api-access-jksnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.739146 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.766479 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.766514 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jksnr\" (UniqueName: \"kubernetes.io/projected/0549f363-78ec-4961-a646-a27f5d96d274-kube-api-access-jksnr\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.766525 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.766533 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0549f363-78ec-4961-a646-a27f5d96d274-log-httpd\") on node \"crc\" 
DevicePath \"\"" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.766541 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.815126 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-config-data" (OuterVolumeSpecName: "config-data") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.868695 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.873148 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0549f363-78ec-4961-a646-a27f5d96d274" (UID: "0549f363-78ec-4961-a646-a27f5d96d274"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:39:46 crc kubenswrapper[4805]: I0226 17:39:46.970147 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0549f363-78ec-4961-a646-a27f5d96d274-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.371226 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0549f363-78ec-4961-a646-a27f5d96d274","Type":"ContainerDied","Data":"3e223a849dfa02fc3ab2de42e2595a226f913343a163b69247b1c3fe274c880c"} Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.371308 4805 scope.go:117] "RemoveContainer" containerID="dd3fb7cf810fbce09aeedbf46f16216a4b3258c03dd067dbedb4d31d19f17a07" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.371392 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.400561 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.411983 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.431492 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:47 crc kubenswrapper[4805]: E0226 17:39:47.432301 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-central-agent" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.432425 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-central-agent" Feb 26 17:39:47 crc kubenswrapper[4805]: E0226 17:39:47.432533 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-notification-agent" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.432636 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-notification-agent" Feb 26 17:39:47 crc kubenswrapper[4805]: E0226 17:39:47.432746 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="proxy-httpd" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.432830 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="proxy-httpd" Feb 26 17:39:47 crc kubenswrapper[4805]: E0226 17:39:47.432913 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="sg-core" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.432990 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="sg-core" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.433390 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="proxy-httpd" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.433492 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-notification-agent" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.435254 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="ceilometer-central-agent" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.435586 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0549f363-78ec-4961-a646-a27f5d96d274" containerName="sg-core" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.443924 4805 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.449325 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.449657 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.450653 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485042 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njv5\" (UniqueName: \"kubernetes.io/projected/1920fe1a-941a-4968-b797-bbdbe80a08de-kube-api-access-6njv5\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485176 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-log-httpd\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485304 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-run-httpd\") pod \"ceilometer-0\" (UID: 
\"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-scripts\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485377 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-config-data\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.485416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.496074 4805 scope.go:117] "RemoveContainer" containerID="5a5032f76c98c3e4ee19606b038ab920869297fa49d8488fc0e15655381f6fca" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.528585 4805 scope.go:117] "RemoveContainer" containerID="99dbac660ab581ebe7b60def48e24e85a96c5d34193dc74a562e2a1a27fb7832" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.558954 4805 scope.go:117] "RemoveContainer" containerID="d8ace3db2e32c46a0ce235ea10a44df24146d7d66e6e5eb5fd930e21e9bb4b8e" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6njv5\" (UniqueName: \"kubernetes.io/projected/1920fe1a-941a-4968-b797-bbdbe80a08de-kube-api-access-6njv5\") pod 
\"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-log-httpd\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-run-httpd\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587794 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-scripts\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587909 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-config-data\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.587964 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.589562 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-run-httpd\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.589841 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-log-httpd\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.593913 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.594531 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-config-data\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.596789 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-scripts\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.605980 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6njv5\" (UniqueName: \"kubernetes.io/projected/1920fe1a-941a-4968-b797-bbdbe80a08de-kube-api-access-6njv5\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.606526 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " pod="openstack/ceilometer-0" Feb 26 17:39:47 crc kubenswrapper[4805]: I0226 17:39:47.810627 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:39:48 crc kubenswrapper[4805]: I0226 17:39:48.967546 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0549f363-78ec-4961-a646-a27f5d96d274" path="/var/lib/kubelet/pods/0549f363-78ec-4961-a646-a27f5d96d274/volumes" Feb 26 17:39:50 crc kubenswrapper[4805]: E0226 17:39:50.735320 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549f363_78ec_4961_a646_a27f5d96d274.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:39:51 crc kubenswrapper[4805]: I0226 17:39:51.344942 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:39:51 crc kubenswrapper[4805]: I0226 17:39:51.350295 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:39:51 crc kubenswrapper[4805]: I0226 17:39:51.354781 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d9687b795-zqn9b" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.145465 4805 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535460-z5f9r"] Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.149101 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.152582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.153064 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.154058 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.164111 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535460-z5f9r"] Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.183274 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7j8d\" (UniqueName: \"kubernetes.io/projected/4b33cba2-320e-4c3b-986b-9d7e3225d30e-kube-api-access-z7j8d\") pod \"auto-csr-approver-29535460-z5f9r\" (UID: \"4b33cba2-320e-4c3b-986b-9d7e3225d30e\") " pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.285418 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7j8d\" (UniqueName: \"kubernetes.io/projected/4b33cba2-320e-4c3b-986b-9d7e3225d30e-kube-api-access-z7j8d\") pod \"auto-csr-approver-29535460-z5f9r\" (UID: \"4b33cba2-320e-4c3b-986b-9d7e3225d30e\") " pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.306418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z7j8d\" (UniqueName: \"kubernetes.io/projected/4b33cba2-320e-4c3b-986b-9d7e3225d30e-kube-api-access-z7j8d\") pod \"auto-csr-approver-29535460-z5f9r\" (UID: \"4b33cba2-320e-4c3b-986b-9d7e3225d30e\") " pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:00 crc kubenswrapper[4805]: I0226 17:40:00.478464 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:01 crc kubenswrapper[4805]: E0226 17:40:01.005831 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549f363_78ec_4961_a646_a27f5d96d274.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:40:02 crc kubenswrapper[4805]: E0226 17:40:02.205190 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 26 17:40:02 crc kubenswrapper[4805]: E0226 17:40:02.206490 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bckv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rb2p9_openshift-marketplace(db58eaee-5842-4d11-babf-1ededef9c68e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 26 17:40:02 crc kubenswrapper[4805]: E0226 17:40:02.207886 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rb2p9" podUID="db58eaee-5842-4d11-babf-1ededef9c68e" Feb 26 17:40:02 crc 
kubenswrapper[4805]: I0226 17:40:02.318979 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:40:02 crc kubenswrapper[4805]: I0226 17:40:02.319242 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-log" containerID="cri-o://85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc" gracePeriod=30 Feb 26 17:40:02 crc kubenswrapper[4805]: I0226 17:40:02.319409 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-httpd" containerID="cri-o://7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b" gracePeriod=30 Feb 26 17:40:02 crc kubenswrapper[4805]: I0226 17:40:02.589058 4805 generic.go:334] "Generic (PLEG): container finished" podID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerID="85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc" exitCode=143 Feb 26 17:40:02 crc kubenswrapper[4805]: I0226 17:40:02.589895 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07f64bde-8a12-4512-a78a-2ba3e0077fa3","Type":"ContainerDied","Data":"85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc"} Feb 26 17:40:02 crc kubenswrapper[4805]: E0226 17:40:02.599335 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rb2p9" podUID="db58eaee-5842-4d11-babf-1ededef9c68e" Feb 26 17:40:02 crc kubenswrapper[4805]: I0226 17:40:02.979245 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:40:02 crc kubenswrapper[4805]: I0226 17:40:02.979572 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:40:03 crc kubenswrapper[4805]: I0226 17:40:03.166988 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535460-z5f9r"] Feb 26 17:40:03 crc kubenswrapper[4805]: I0226 17:40:03.188714 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:40:03 crc kubenswrapper[4805]: W0226 17:40:03.212527 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b33cba2_320e_4c3b_986b_9d7e3225d30e.slice/crio-dadebc1f2219505180445de98755171b7d20a299e68127a48a37dba76c59c5d0 WatchSource:0}: Error finding container dadebc1f2219505180445de98755171b7d20a299e68127a48a37dba76c59c5d0: Status 404 returned error can't find the container with id dadebc1f2219505180445de98755171b7d20a299e68127a48a37dba76c59c5d0 Feb 26 17:40:03 crc kubenswrapper[4805]: W0226 17:40:03.214351 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1920fe1a_941a_4968_b797_bbdbe80a08de.slice/crio-c45aec9e07416eae3e4b367d501f045c7ccd21b69c5b255ec5cd9ac9af26c745 WatchSource:0}: Error finding container c45aec9e07416eae3e4b367d501f045c7ccd21b69c5b255ec5cd9ac9af26c745: Status 404 returned error can't find the container with id c45aec9e07416eae3e4b367d501f045c7ccd21b69c5b255ec5cd9ac9af26c745 Feb 26 
17:40:03 crc kubenswrapper[4805]: I0226 17:40:03.599643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" event={"ID":"4b33cba2-320e-4c3b-986b-9d7e3225d30e","Type":"ContainerStarted","Data":"dadebc1f2219505180445de98755171b7d20a299e68127a48a37dba76c59c5d0"} Feb 26 17:40:03 crc kubenswrapper[4805]: I0226 17:40:03.601545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerStarted","Data":"c45aec9e07416eae3e4b367d501f045c7ccd21b69c5b255ec5cd9ac9af26c745"} Feb 26 17:40:05 crc kubenswrapper[4805]: I0226 17:40:05.412616 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-66c9c57f69-5r7hv" podUID="04a821a2-53df-4081-a120-61b7b90b3120" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9696/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 26 17:40:07 crc kubenswrapper[4805]: I0226 17:40:07.641038 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3bfa8b25-9a4e-482c-b2f6-15347757aec2","Type":"ContainerStarted","Data":"95862db801b9deeda7c27f75f65463331b182f5e88a30e6ca91534db100d72a9"} Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.558826 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.698188 4805 generic.go:334] "Generic (PLEG): container finished" podID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerID="7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b" exitCode=0 Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.698315 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.698863 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07f64bde-8a12-4512-a78a-2ba3e0077fa3","Type":"ContainerDied","Data":"7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b"} Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.698902 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07f64bde-8a12-4512-a78a-2ba3e0077fa3","Type":"ContainerDied","Data":"d80b0f3ba8bed38ae8c2df0a58206a5a3d321b1eff6bbb91f84b043a2089be5e"} Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.698921 4805 scope.go:117] "RemoveContainer" containerID="7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710052 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jktxs"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710235 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710349 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-logs\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710469 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-config-data\") pod 
\"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: E0226 17:40:08.710495 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-log" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710509 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-log" Feb 26 17:40:08 crc kubenswrapper[4805]: E0226 17:40:08.710523 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-httpd" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710529 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-httpd" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710598 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8lc9\" (UniqueName: \"kubernetes.io/projected/07f64bde-8a12-4512-a78a-2ba3e0077fa3-kube-api-access-b8lc9\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710648 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-scripts\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710671 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-combined-ca-bundle\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710704 
4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-httpd-run\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710752 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-httpd" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710765 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" containerName="glance-log" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.710792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-public-tls-certs\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") " Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.711490 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.712894 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-logs" (OuterVolumeSpecName: "logs") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.715610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.754112 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a085-account-create-update-5t9kt"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.755823 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.756580 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f64bde-8a12-4512-a78a-2ba3e0077fa3-kube-api-access-b8lc9" (OuterVolumeSpecName: "kube-api-access-b8lc9") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "kube-api-access-b8lc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.761292 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.761494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-scripts" (OuterVolumeSpecName: "scripts") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.791155 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jktxs"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.799252 4805 scope.go:117] "RemoveContainer" containerID="85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.819147 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8365e8e6-fa3b-4a87-936c-d63936b861d0-operator-scripts\") pod \"nova-api-db-create-jktxs\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.819406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w46cz\" (UniqueName: \"kubernetes.io/projected/8365e8e6-fa3b-4a87-936c-d63936b861d0-kube-api-access-w46cz\") pod \"nova-api-db-create-jktxs\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.819500 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8lc9\" (UniqueName: \"kubernetes.io/projected/07f64bde-8a12-4512-a78a-2ba3e0077fa3-kube-api-access-b8lc9\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.819512 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.819522 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-httpd-run\") on node \"crc\" DevicePath 
\"\"" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.819530 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07f64bde-8a12-4512-a78a-2ba3e0077fa3-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.833224 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a085-account-create-update-5t9kt"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.837236 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=8.08203646 podStartE2EDuration="38.837210266s" podCreationTimestamp="2026-02-26 17:39:30 +0000 UTC" firstStartedPulling="2026-02-26 17:39:31.614169903 +0000 UTC m=+1486.175924242" lastFinishedPulling="2026-02-26 17:40:02.369343709 +0000 UTC m=+1516.931098048" observedRunningTime="2026-02-26 17:40:08.734301509 +0000 UTC m=+1523.296055848" watchObservedRunningTime="2026-02-26 17:40:08.837210266 +0000 UTC m=+1523.398964605" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.858629 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.901500 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-config-data" (OuterVolumeSpecName: "config-data") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: E0226 17:40:08.923822 4805 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") : UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes/kubernetes.io~csi/pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes/kubernetes.io~csi/pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64/vol_data.json]: open /var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes/kubernetes.io~csi/pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\" (UID: \"07f64bde-8a12-4512-a78a-2ba3e0077fa3\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes/kubernetes.io~csi/pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes/kubernetes.io~csi/pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64/vol_data.json]: open 
/var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes/kubernetes.io~csi/pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64/vol_data.json: no such file or directory" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.924960 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d80f8620-b048-40aa-a97c-07cdd033379f-operator-scripts\") pod \"nova-api-a085-account-create-update-5t9kt\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.925357 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbj8q\" (UniqueName: \"kubernetes.io/projected/d80f8620-b048-40aa-a97c-07cdd033379f-kube-api-access-cbj8q\") pod \"nova-api-a085-account-create-update-5t9kt\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.925673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w46cz\" (UniqueName: \"kubernetes.io/projected/8365e8e6-fa3b-4a87-936c-d63936b861d0-kube-api-access-w46cz\") pod \"nova-api-db-create-jktxs\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.925923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8365e8e6-fa3b-4a87-936c-d63936b861d0-operator-scripts\") pod \"nova-api-db-create-jktxs\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.926243 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.926769 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.931660 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64" (OuterVolumeSpecName: "glance") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.946277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07f64bde-8a12-4512-a78a-2ba3e0077fa3" (UID: "07f64bde-8a12-4512-a78a-2ba3e0077fa3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.947710 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-76npn"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.948624 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8365e8e6-fa3b-4a87-936c-d63936b861d0-operator-scripts\") pod \"nova-api-db-create-jktxs\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.951618 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.959224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w46cz\" (UniqueName: \"kubernetes.io/projected/8365e8e6-fa3b-4a87-936c-d63936b861d0-kube-api-access-w46cz\") pod \"nova-api-db-create-jktxs\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.984793 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-76npn"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.995118 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-snh6t"] Feb 26 17:40:08 crc kubenswrapper[4805]: I0226 17:40:08.997682 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.029830 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d80f8620-b048-40aa-a97c-07cdd033379f-operator-scripts\") pod \"nova-api-a085-account-create-update-5t9kt\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.029976 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbj8q\" (UniqueName: \"kubernetes.io/projected/d80f8620-b048-40aa-a97c-07cdd033379f-kube-api-access-cbj8q\") pod \"nova-api-a085-account-create-update-5t9kt\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.030110 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") on node \"crc\" " Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.030129 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07f64bde-8a12-4512-a78a-2ba3e0077fa3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.030653 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d80f8620-b048-40aa-a97c-07cdd033379f-operator-scripts\") pod \"nova-api-a085-account-create-update-5t9kt\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.055300 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-snh6t"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.058118 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbj8q\" (UniqueName: \"kubernetes.io/projected/d80f8620-b048-40aa-a97c-07cdd033379f-kube-api-access-cbj8q\") pod \"nova-api-a085-account-create-update-5t9kt\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.106102 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1518-account-create-update-ktdrg"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.107917 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.113435 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.114006 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.121793 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64") on node "crc" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.124317 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1518-account-create-update-ktdrg"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.134094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435245a7-a717-4cb3-8125-9800dd40f909-operator-scripts\") pod \"nova-cell0-db-create-76npn\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.134143 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5n9\" (UniqueName: \"kubernetes.io/projected/435245a7-a717-4cb3-8125-9800dd40f909-kube-api-access-6t5n9\") pod \"nova-cell0-db-create-76npn\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.134243 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvcz\" (UniqueName: \"kubernetes.io/projected/cec8ed7b-d0af-4d30-b8d0-764518020645-kube-api-access-ztvcz\") pod \"nova-cell1-db-create-snh6t\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.134273 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cec8ed7b-d0af-4d30-b8d0-764518020645-operator-scripts\") pod \"nova-cell1-db-create-snh6t\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.134458 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.164836 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c5c5-account-create-update-675kp"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.179244 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c5c5-account-create-update-675kp"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.179823 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.185407 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.224186 4805 scope.go:117] "RemoveContainer" containerID="7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b" Feb 26 17:40:09 crc kubenswrapper[4805]: E0226 17:40:09.228169 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b\": container with ID starting with 7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b not found: ID does not exist" containerID="7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.228207 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b"} err="failed to get container status \"7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b\": rpc error: code = NotFound desc = could not find container \"7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b\": container with ID starting with 7b007b98697e38999f4d180dc1a5dac324d0b5e128ea21e29d5db8dc6917692b not found: ID does not exist" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.228232 4805 scope.go:117] "RemoveContainer" containerID="85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.228310 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:40:09 crc kubenswrapper[4805]: E0226 17:40:09.229567 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc\": container with ID starting with 85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc not found: ID does not exist" containerID="85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.229590 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc"} err="failed to get container status \"85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc\": rpc error: code = NotFound desc = could not find container \"85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc\": container with ID starting with 85a94329c97ffad5ba4c23516d05fe998f94f87ac5bde025ab4799069bd4cffc not found: ID does not exist" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.236337 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.237810 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gdkx\" (UniqueName: \"kubernetes.io/projected/ad35cf63-aace-4b6f-b063-1f3642da07da-kube-api-access-9gdkx\") pod \"nova-cell0-1518-account-create-update-ktdrg\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.237923 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435245a7-a717-4cb3-8125-9800dd40f909-operator-scripts\") pod \"nova-cell0-db-create-76npn\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.237954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5n9\" (UniqueName: \"kubernetes.io/projected/435245a7-a717-4cb3-8125-9800dd40f909-kube-api-access-6t5n9\") pod \"nova-cell0-db-create-76npn\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.237992 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztvcz\" (UniqueName: \"kubernetes.io/projected/cec8ed7b-d0af-4d30-b8d0-764518020645-kube-api-access-ztvcz\") pod \"nova-cell1-db-create-snh6t\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.238016 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cec8ed7b-d0af-4d30-b8d0-764518020645-operator-scripts\") pod 
\"nova-cell1-db-create-snh6t\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.240438 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435245a7-a717-4cb3-8125-9800dd40f909-operator-scripts\") pod \"nova-cell0-db-create-76npn\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.252156 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.254002 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35cf63-aace-4b6f-b063-1f3642da07da-operator-scripts\") pod \"nova-cell0-1518-account-create-update-ktdrg\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.254716 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.278649 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5n9\" (UniqueName: \"kubernetes.io/projected/435245a7-a717-4cb3-8125-9800dd40f909-kube-api-access-6t5n9\") pod \"nova-cell0-db-create-76npn\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.283103 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.284829 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.289427 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.289784 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.290091 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.290511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.356694 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gdkx\" (UniqueName: \"kubernetes.io/projected/ad35cf63-aace-4b6f-b063-1f3642da07da-kube-api-access-9gdkx\") pod \"nova-cell0-1518-account-create-update-ktdrg\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.357312 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-operator-scripts\") pod \"nova-cell1-c5c5-account-create-update-675kp\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.358018 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tn4\" (UniqueName: \"kubernetes.io/projected/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-kube-api-access-67tn4\") pod \"nova-cell1-c5c5-account-create-update-675kp\" (UID: 
\"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.358191 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35cf63-aace-4b6f-b063-1f3642da07da-operator-scripts\") pod \"nova-cell0-1518-account-create-update-ktdrg\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.383449 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35cf63-aace-4b6f-b063-1f3642da07da-operator-scripts\") pod \"nova-cell0-1518-account-create-update-ktdrg\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.384180 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cec8ed7b-d0af-4d30-b8d0-764518020645-operator-scripts\") pod \"nova-cell1-db-create-snh6t\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.403749 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gdkx\" (UniqueName: \"kubernetes.io/projected/ad35cf63-aace-4b6f-b063-1f3642da07da-kube-api-access-9gdkx\") pod \"nova-cell0-1518-account-create-update-ktdrg\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.405568 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztvcz\" (UniqueName: 
\"kubernetes.io/projected/cec8ed7b-d0af-4d30-b8d0-764518020645-kube-api-access-ztvcz\") pod \"nova-cell1-db-create-snh6t\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.460290 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67tn4\" (UniqueName: \"kubernetes.io/projected/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-kube-api-access-67tn4\") pod \"nova-cell1-c5c5-account-create-update-675kp\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.460361 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.460382 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.460499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.460543 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.460797 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-logs\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.461221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zkm\" (UniqueName: \"kubernetes.io/projected/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-kube-api-access-82zkm\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.461283 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.461376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-operator-scripts\") pod \"nova-cell1-c5c5-account-create-update-675kp\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 
17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.461485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.462165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-operator-scripts\") pod \"nova-cell1-c5c5-account-create-update-675kp\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.487688 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.487751 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67tn4\" (UniqueName: \"kubernetes.io/projected/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-kube-api-access-67tn4\") pod \"nova-cell1-c5c5-account-create-update-675kp\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.545597 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.563325 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82zkm\" (UniqueName: \"kubernetes.io/projected/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-kube-api-access-82zkm\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.563613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564487 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" 
(UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564758 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564914 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-logs\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.564959 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.565335 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-logs\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: 
I0226 17:40:09.567838 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.569163 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.569572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.570379 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.578771 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.578846 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a78d83e29b9efed98adc7cd32a238f67178734183d9dc19f1da819604a3e7a12/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.591798 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82zkm\" (UniqueName: \"kubernetes.io/projected/ab72a0e1-fd82-4334-9124-8a3cc815bdd6-kube-api-access-82zkm\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.629569 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.718527 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e690e11b-9388-4576-8b72-ee5e3fec1e64\") pod \"glance-default-external-api-0\" (UID: \"ab72a0e1-fd82-4334-9124-8a3cc815bdd6\") " pod="openstack/glance-default-external-api-0" Feb 26 17:40:09 crc kubenswrapper[4805]: I0226 17:40:09.913248 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 26 17:40:10 crc kubenswrapper[4805]: I0226 17:40:10.462570 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.198:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:40:10 crc kubenswrapper[4805]: I0226 17:40:10.471810 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.198:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:40:10 crc kubenswrapper[4805]: I0226 17:40:10.626866 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a085-account-create-update-5t9kt"] Feb 26 17:40:10 crc kubenswrapper[4805]: I0226 17:40:10.860920 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a085-account-create-update-5t9kt" event={"ID":"d80f8620-b048-40aa-a97c-07cdd033379f","Type":"ContainerStarted","Data":"e3b27dfbef3edf50b57455c77733b3dd7f7f2f6e5aa14d922769cbf461904e27"} Feb 26 17:40:10 crc kubenswrapper[4805]: W0226 17:40:10.989024 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad35cf63_aace_4b6f_b063_1f3642da07da.slice/crio-711d04c33eced6b35eaed355ab4a5611976cea371b75b033257b39d70180bce8 WatchSource:0}: Error finding container 711d04c33eced6b35eaed355ab4a5611976cea371b75b033257b39d70180bce8: Status 404 returned error can't find the container with id 711d04c33eced6b35eaed355ab4a5611976cea371b75b033257b39d70180bce8 Feb 26 17:40:10 crc kubenswrapper[4805]: I0226 17:40:10.997270 4805 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="07f64bde-8a12-4512-a78a-2ba3e0077fa3" path="/var/lib/kubelet/pods/07f64bde-8a12-4512-a78a-2ba3e0077fa3/volumes" Feb 26 17:40:10 crc kubenswrapper[4805]: I0226 17:40:10.998776 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1518-account-create-update-ktdrg"] Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.205046 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c5c5-account-create-update-675kp"] Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.356807 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jktxs"] Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.422726 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-76npn"] Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.542026 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-snh6t"] Feb 26 17:40:11 crc kubenswrapper[4805]: E0226 17:40:11.614381 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549f363_78ec_4961_a646_a27f5d96d274.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.696986 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.944283 4805 generic.go:334] "Generic (PLEG): container finished" podID="d80f8620-b048-40aa-a97c-07cdd033379f" containerID="e5b35dab67f4fa36723c7c2b347b7be0dd192c369bbba9e62e5e5adc1902b738" exitCode=0 Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.944389 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a085-account-create-update-5t9kt" 
event={"ID":"d80f8620-b048-40aa-a97c-07cdd033379f","Type":"ContainerDied","Data":"e5b35dab67f4fa36723c7c2b347b7be0dd192c369bbba9e62e5e5adc1902b738"} Feb 26 17:40:11 crc kubenswrapper[4805]: I0226 17:40:11.962284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerStarted","Data":"329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.008381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" event={"ID":"4b33cba2-320e-4c3b-986b-9d7e3225d30e","Type":"ContainerStarted","Data":"bc41c763572f16446ceb6bcb0b0103b7f75d518389412a0e07eceae1b928ee7c"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.033317 4805 generic.go:334] "Generic (PLEG): container finished" podID="ad35cf63-aace-4b6f-b063-1f3642da07da" containerID="c6156fc5edcfeb21bc35b251b08eaebf27d36d89c0dc749e64b96adbd1f0bd11" exitCode=0 Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.033433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" event={"ID":"ad35cf63-aace-4b6f-b063-1f3642da07da","Type":"ContainerDied","Data":"c6156fc5edcfeb21bc35b251b08eaebf27d36d89c0dc749e64b96adbd1f0bd11"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.033471 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" event={"ID":"ad35cf63-aace-4b6f-b063-1f3642da07da","Type":"ContainerStarted","Data":"711d04c33eced6b35eaed355ab4a5611976cea371b75b033257b39d70180bce8"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.047784 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jktxs" event={"ID":"8365e8e6-fa3b-4a87-936c-d63936b861d0","Type":"ContainerStarted","Data":"7c47204c5c9d250c6aef32a80eed724f4df446352460f07b8536e5d51c6bcb19"} 
Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.076422 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab72a0e1-fd82-4334-9124-8a3cc815bdd6","Type":"ContainerStarted","Data":"27220b029b8c2e05dbe071e449741095a09c578a26199627d854dbe98d169bcc"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.087411 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snh6t" event={"ID":"cec8ed7b-d0af-4d30-b8d0-764518020645","Type":"ContainerStarted","Data":"e80b5be90cb9e095d07400790b33ce5eb9fac36a2056192dfafceb3cb20948e2"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.088632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" event={"ID":"873a3075-565b-48e7-a3d7-e0bcc6a0a60b","Type":"ContainerStarted","Data":"acedf37dfa3dd13d23f4707dee188e7e7b37c50574245473acb8496ad1dafcc4"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.088659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" event={"ID":"873a3075-565b-48e7-a3d7-e0bcc6a0a60b","Type":"ContainerStarted","Data":"728e0af3e992a6d5c5fdc4f4fd62079d1fe5e3a5e484e490b270ae6daadd0a90"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.090382 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76npn" event={"ID":"435245a7-a717-4cb3-8125-9800dd40f909","Type":"ContainerStarted","Data":"c7578015f42fe2b3612d827fe11a5e147894303d6758e5b6239f677894f66546"} Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.299635 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" podStartSLOduration=4.299598631 podStartE2EDuration="4.299598631s" podCreationTimestamp="2026-02-26 17:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:40:12.116476589 +0000 UTC m=+1526.678230928" watchObservedRunningTime="2026-02-26 17:40:12.299598631 +0000 UTC m=+1526.861352970" Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.990305 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.990534 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-log" containerID="cri-o://7cb331726954a0f3646c4ed8dbb743dca78dddad2acba37a2ca3d42f2edc5afa" gracePeriod=30 Feb 26 17:40:12 crc kubenswrapper[4805]: I0226 17:40:12.990923 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-httpd" containerID="cri-o://db82634890c0444fb4366b978e1d6658530b01df1c672681376cf76fed06ad90" gracePeriod=30 Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.245737 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jktxs" event={"ID":"8365e8e6-fa3b-4a87-936c-d63936b861d0","Type":"ContainerStarted","Data":"bfb8e199a6ecabf2e9849b0fdf0e6af6ad70d102d761356009ddd49368d17355"} Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.249130 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snh6t" event={"ID":"cec8ed7b-d0af-4d30-b8d0-764518020645","Type":"ContainerStarted","Data":"56d5eddd25a3f123749b0212de503c05892bf64ea83aab75b0a641264ab47e1d"} Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.266994 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-jktxs" podStartSLOduration=5.266970428 podStartE2EDuration="5.266970428s" 
podCreationTimestamp="2026-02-26 17:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:40:13.261733516 +0000 UTC m=+1527.823487855" watchObservedRunningTime="2026-02-26 17:40:13.266970428 +0000 UTC m=+1527.828724767" Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.316568 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-snh6t" podStartSLOduration=5.3165442689999995 podStartE2EDuration="5.316544269s" podCreationTimestamp="2026-02-26 17:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:40:13.294439111 +0000 UTC m=+1527.856193460" watchObservedRunningTime="2026-02-26 17:40:13.316544269 +0000 UTC m=+1527.878298618" Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.327991 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerStarted","Data":"7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e"} Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.348065 4805 generic.go:334] "Generic (PLEG): container finished" podID="873a3075-565b-48e7-a3d7-e0bcc6a0a60b" containerID="acedf37dfa3dd13d23f4707dee188e7e7b37c50574245473acb8496ad1dafcc4" exitCode=0 Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.348179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" event={"ID":"873a3075-565b-48e7-a3d7-e0bcc6a0a60b","Type":"ContainerDied","Data":"acedf37dfa3dd13d23f4707dee188e7e7b37c50574245473acb8496ad1dafcc4"} Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.388736 4805 generic.go:334] "Generic (PLEG): container finished" podID="4b33cba2-320e-4c3b-986b-9d7e3225d30e" 
containerID="bc41c763572f16446ceb6bcb0b0103b7f75d518389412a0e07eceae1b928ee7c" exitCode=0 Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.388896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" event={"ID":"4b33cba2-320e-4c3b-986b-9d7e3225d30e","Type":"ContainerDied","Data":"bc41c763572f16446ceb6bcb0b0103b7f75d518389412a0e07eceae1b928ee7c"} Feb 26 17:40:13 crc kubenswrapper[4805]: I0226 17:40:13.427230 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76npn" event={"ID":"435245a7-a717-4cb3-8125-9800dd40f909","Type":"ContainerStarted","Data":"53f74aaeea74527f6b76cab70cebba1ec8737bf0505435a9496bfc8e895269ed"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.403475 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.460083 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbj8q\" (UniqueName: \"kubernetes.io/projected/d80f8620-b048-40aa-a97c-07cdd033379f-kube-api-access-cbj8q\") pod \"d80f8620-b048-40aa-a97c-07cdd033379f\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.460218 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d80f8620-b048-40aa-a97c-07cdd033379f-operator-scripts\") pod \"d80f8620-b048-40aa-a97c-07cdd033379f\" (UID: \"d80f8620-b048-40aa-a97c-07cdd033379f\") " Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.461036 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d80f8620-b048-40aa-a97c-07cdd033379f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d80f8620-b048-40aa-a97c-07cdd033379f" (UID: 
"d80f8620-b048-40aa-a97c-07cdd033379f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.461114 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerID="7cb331726954a0f3646c4ed8dbb743dca78dddad2acba37a2ca3d42f2edc5afa" exitCode=143 Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.461265 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ad5bd95-e53c-443e-8204-69377ae9600c","Type":"ContainerDied","Data":"7cb331726954a0f3646c4ed8dbb743dca78dddad2acba37a2ca3d42f2edc5afa"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.465993 4805 generic.go:334] "Generic (PLEG): container finished" podID="435245a7-a717-4cb3-8125-9800dd40f909" containerID="53f74aaeea74527f6b76cab70cebba1ec8737bf0505435a9496bfc8e895269ed" exitCode=0 Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.466114 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76npn" event={"ID":"435245a7-a717-4cb3-8125-9800dd40f909","Type":"ContainerDied","Data":"53f74aaeea74527f6b76cab70cebba1ec8737bf0505435a9496bfc8e895269ed"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.476544 4805 generic.go:334] "Generic (PLEG): container finished" podID="8365e8e6-fa3b-4a87-936c-d63936b861d0" containerID="bfb8e199a6ecabf2e9849b0fdf0e6af6ad70d102d761356009ddd49368d17355" exitCode=0 Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.476673 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jktxs" event={"ID":"8365e8e6-fa3b-4a87-936c-d63936b861d0","Type":"ContainerDied","Data":"bfb8e199a6ecabf2e9849b0fdf0e6af6ad70d102d761356009ddd49368d17355"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.492255 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d80f8620-b048-40aa-a97c-07cdd033379f-kube-api-access-cbj8q" (OuterVolumeSpecName: "kube-api-access-cbj8q") pod "d80f8620-b048-40aa-a97c-07cdd033379f" (UID: "d80f8620-b048-40aa-a97c-07cdd033379f"). InnerVolumeSpecName "kube-api-access-cbj8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.502312 4805 generic.go:334] "Generic (PLEG): container finished" podID="cec8ed7b-d0af-4d30-b8d0-764518020645" containerID="56d5eddd25a3f123749b0212de503c05892bf64ea83aab75b0a641264ab47e1d" exitCode=0 Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.502403 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snh6t" event={"ID":"cec8ed7b-d0af-4d30-b8d0-764518020645","Type":"ContainerDied","Data":"56d5eddd25a3f123749b0212de503c05892bf64ea83aab75b0a641264ab47e1d"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.521999 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a085-account-create-update-5t9kt" event={"ID":"d80f8620-b048-40aa-a97c-07cdd033379f","Type":"ContainerDied","Data":"e3b27dfbef3edf50b57455c77733b3dd7f7f2f6e5aa14d922769cbf461904e27"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.522052 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3b27dfbef3edf50b57455c77733b3dd7f7f2f6e5aa14d922769cbf461904e27" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.522121 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a085-account-create-update-5t9kt" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.562569 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d80f8620-b048-40aa-a97c-07cdd033379f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.562616 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbj8q\" (UniqueName: \"kubernetes.io/projected/d80f8620-b048-40aa-a97c-07cdd033379f-kube-api-access-cbj8q\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.576680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerStarted","Data":"e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9"} Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.577940 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.614961 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.664145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gdkx\" (UniqueName: \"kubernetes.io/projected/ad35cf63-aace-4b6f-b063-1f3642da07da-kube-api-access-9gdkx\") pod \"ad35cf63-aace-4b6f-b063-1f3642da07da\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.664413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7j8d\" (UniqueName: \"kubernetes.io/projected/4b33cba2-320e-4c3b-986b-9d7e3225d30e-kube-api-access-z7j8d\") pod \"4b33cba2-320e-4c3b-986b-9d7e3225d30e\" (UID: \"4b33cba2-320e-4c3b-986b-9d7e3225d30e\") " Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.664571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35cf63-aace-4b6f-b063-1f3642da07da-operator-scripts\") pod \"ad35cf63-aace-4b6f-b063-1f3642da07da\" (UID: \"ad35cf63-aace-4b6f-b063-1f3642da07da\") " Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.674297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad35cf63-aace-4b6f-b063-1f3642da07da-kube-api-access-9gdkx" (OuterVolumeSpecName: "kube-api-access-9gdkx") pod "ad35cf63-aace-4b6f-b063-1f3642da07da" (UID: "ad35cf63-aace-4b6f-b063-1f3642da07da"). InnerVolumeSpecName "kube-api-access-9gdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.679436 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad35cf63-aace-4b6f-b063-1f3642da07da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad35cf63-aace-4b6f-b063-1f3642da07da" (UID: "ad35cf63-aace-4b6f-b063-1f3642da07da"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.694223 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b33cba2-320e-4c3b-986b-9d7e3225d30e-kube-api-access-z7j8d" (OuterVolumeSpecName: "kube-api-access-z7j8d") pod "4b33cba2-320e-4c3b-986b-9d7e3225d30e" (UID: "4b33cba2-320e-4c3b-986b-9d7e3225d30e"). InnerVolumeSpecName "kube-api-access-z7j8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.767582 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad35cf63-aace-4b6f-b063-1f3642da07da-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.767621 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gdkx\" (UniqueName: \"kubernetes.io/projected/ad35cf63-aace-4b6f-b063-1f3642da07da-kube-api-access-9gdkx\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:14 crc kubenswrapper[4805]: I0226 17:40:14.767636 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7j8d\" (UniqueName: \"kubernetes.io/projected/4b33cba2-320e-4c3b-986b-9d7e3225d30e-kube-api-access-z7j8d\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.199193 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.213663 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.315685 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435245a7-a717-4cb3-8125-9800dd40f909-operator-scripts\") pod \"435245a7-a717-4cb3-8125-9800dd40f909\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.316053 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t5n9\" (UniqueName: \"kubernetes.io/projected/435245a7-a717-4cb3-8125-9800dd40f909-kube-api-access-6t5n9\") pod \"435245a7-a717-4cb3-8125-9800dd40f909\" (UID: \"435245a7-a717-4cb3-8125-9800dd40f909\") " Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.316249 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-operator-scripts\") pod \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.316370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67tn4\" (UniqueName: \"kubernetes.io/projected/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-kube-api-access-67tn4\") pod \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\" (UID: \"873a3075-565b-48e7-a3d7-e0bcc6a0a60b\") " Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.317988 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435245a7-a717-4cb3-8125-9800dd40f909-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "435245a7-a717-4cb3-8125-9800dd40f909" (UID: "435245a7-a717-4cb3-8125-9800dd40f909"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.318648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "873a3075-565b-48e7-a3d7-e0bcc6a0a60b" (UID: "873a3075-565b-48e7-a3d7-e0bcc6a0a60b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.323389 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-kube-api-access-67tn4" (OuterVolumeSpecName: "kube-api-access-67tn4") pod "873a3075-565b-48e7-a3d7-e0bcc6a0a60b" (UID: "873a3075-565b-48e7-a3d7-e0bcc6a0a60b"). InnerVolumeSpecName "kube-api-access-67tn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.325379 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435245a7-a717-4cb3-8125-9800dd40f909-kube-api-access-6t5n9" (OuterVolumeSpecName: "kube-api-access-6t5n9") pod "435245a7-a717-4cb3-8125-9800dd40f909" (UID: "435245a7-a717-4cb3-8125-9800dd40f909"). InnerVolumeSpecName "kube-api-access-6t5n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.424376 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/435245a7-a717-4cb3-8125-9800dd40f909-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.424428 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t5n9\" (UniqueName: \"kubernetes.io/projected/435245a7-a717-4cb3-8125-9800dd40f909-kube-api-access-6t5n9\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.424447 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.424457 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67tn4\" (UniqueName: \"kubernetes.io/projected/873a3075-565b-48e7-a3d7-e0bcc6a0a60b-kube-api-access-67tn4\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.483062 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.198:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.483563 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.198:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.593782 4805 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.610134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab72a0e1-fd82-4334-9124-8a3cc815bdd6","Type":"ContainerStarted","Data":"48bafb12bc0bf86b42e5acc918d86a945ca96543bae0c22c9cd36461341d0fdb"} Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.613349 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.613362 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1518-account-create-update-ktdrg" event={"ID":"ad35cf63-aace-4b6f-b063-1f3642da07da","Type":"ContainerDied","Data":"711d04c33eced6b35eaed355ab4a5611976cea371b75b033257b39d70180bce8"} Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.613434 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711d04c33eced6b35eaed355ab4a5611976cea371b75b033257b39d70180bce8" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.625184 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.629520 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c5c5-account-create-update-675kp" event={"ID":"873a3075-565b-48e7-a3d7-e0bcc6a0a60b","Type":"ContainerDied","Data":"728e0af3e992a6d5c5fdc4f4fd62079d1fe5e3a5e484e490b270ae6daadd0a90"} Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.629561 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="728e0af3e992a6d5c5fdc4f4fd62079d1fe5e3a5e484e490b270ae6daadd0a90" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.662545 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" event={"ID":"4b33cba2-320e-4c3b-986b-9d7e3225d30e","Type":"ContainerDied","Data":"dadebc1f2219505180445de98755171b7d20a299e68127a48a37dba76c59c5d0"} Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.662591 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadebc1f2219505180445de98755171b7d20a299e68127a48a37dba76c59c5d0" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.662698 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535460-z5f9r" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.682609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-76npn" event={"ID":"435245a7-a717-4cb3-8125-9800dd40f909","Type":"ContainerDied","Data":"c7578015f42fe2b3612d827fe11a5e147894303d6758e5b6239f677894f66546"} Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.682678 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7578015f42fe2b3612d827fe11a5e147894303d6758e5b6239f677894f66546" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.684810 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-76npn" Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.745771 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535454-zzksv"] Feb 26 17:40:15 crc kubenswrapper[4805]: I0226 17:40:15.759396 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535454-zzksv"] Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.289633 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.309358 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8365e8e6-fa3b-4a87-936c-d63936b861d0-operator-scripts\") pod \"8365e8e6-fa3b-4a87-936c-d63936b861d0\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.309412 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w46cz\" (UniqueName: \"kubernetes.io/projected/8365e8e6-fa3b-4a87-936c-d63936b861d0-kube-api-access-w46cz\") pod \"8365e8e6-fa3b-4a87-936c-d63936b861d0\" (UID: \"8365e8e6-fa3b-4a87-936c-d63936b861d0\") " Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.309942 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8365e8e6-fa3b-4a87-936c-d63936b861d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8365e8e6-fa3b-4a87-936c-d63936b861d0" (UID: "8365e8e6-fa3b-4a87-936c-d63936b861d0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.310427 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8365e8e6-fa3b-4a87-936c-d63936b861d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.315337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8365e8e6-fa3b-4a87-936c-d63936b861d0-kube-api-access-w46cz" (OuterVolumeSpecName: "kube-api-access-w46cz") pod "8365e8e6-fa3b-4a87-936c-d63936b861d0" (UID: "8365e8e6-fa3b-4a87-936c-d63936b861d0"). InnerVolumeSpecName "kube-api-access-w46cz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.413650 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w46cz\" (UniqueName: \"kubernetes.io/projected/8365e8e6-fa3b-4a87-936c-d63936b861d0-kube-api-access-w46cz\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.486278 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.515133 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cec8ed7b-d0af-4d30-b8d0-764518020645-operator-scripts\") pod \"cec8ed7b-d0af-4d30-b8d0-764518020645\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.515191 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztvcz\" (UniqueName: \"kubernetes.io/projected/cec8ed7b-d0af-4d30-b8d0-764518020645-kube-api-access-ztvcz\") pod \"cec8ed7b-d0af-4d30-b8d0-764518020645\" (UID: \"cec8ed7b-d0af-4d30-b8d0-764518020645\") " Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.515890 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec8ed7b-d0af-4d30-b8d0-764518020645-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cec8ed7b-d0af-4d30-b8d0-764518020645" (UID: "cec8ed7b-d0af-4d30-b8d0-764518020645"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.522702 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec8ed7b-d0af-4d30-b8d0-764518020645-kube-api-access-ztvcz" (OuterVolumeSpecName: "kube-api-access-ztvcz") pod "cec8ed7b-d0af-4d30-b8d0-764518020645" (UID: "cec8ed7b-d0af-4d30-b8d0-764518020645"). InnerVolumeSpecName "kube-api-access-ztvcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.618067 4805 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cec8ed7b-d0af-4d30-b8d0-764518020645-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.618100 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztvcz\" (UniqueName: \"kubernetes.io/projected/cec8ed7b-d0af-4d30-b8d0-764518020645-kube-api-access-ztvcz\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.696561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jktxs" event={"ID":"8365e8e6-fa3b-4a87-936c-d63936b861d0","Type":"ContainerDied","Data":"7c47204c5c9d250c6aef32a80eed724f4df446352460f07b8536e5d51c6bcb19"} Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.696603 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c47204c5c9d250c6aef32a80eed724f4df446352460f07b8536e5d51c6bcb19" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.696675 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jktxs" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.716257 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab72a0e1-fd82-4334-9124-8a3cc815bdd6","Type":"ContainerStarted","Data":"59d3909793bfd1604db6267e41cc006ae5a5a500f350adc4504f8205d881e5ae"} Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.743513 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-snh6t" event={"ID":"cec8ed7b-d0af-4d30-b8d0-764518020645","Type":"ContainerDied","Data":"e80b5be90cb9e095d07400790b33ce5eb9fac36a2056192dfafceb3cb20948e2"} Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.743563 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e80b5be90cb9e095d07400790b33ce5eb9fac36a2056192dfafceb3cb20948e2" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.743630 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-snh6t" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.748090 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.748071385 podStartE2EDuration="7.748071385s" podCreationTimestamp="2026-02-26 17:40:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:40:16.74469437 +0000 UTC m=+1531.306448709" watchObservedRunningTime="2026-02-26 17:40:16.748071385 +0000 UTC m=+1531.309825724" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.753870 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerStarted","Data":"c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7"} Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.754052 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-central-agent" containerID="cri-o://329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839" gracePeriod=30 Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.754282 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.754569 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="proxy-httpd" containerID="cri-o://c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7" gracePeriod=30 Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.754620 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="sg-core" containerID="cri-o://e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9" gracePeriod=30 Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.754653 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-notification-agent" containerID="cri-o://7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e" gracePeriod=30 Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.794972 4805 generic.go:334] "Generic (PLEG): container finished" podID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerID="db82634890c0444fb4366b978e1d6658530b01df1c672681376cf76fed06ad90" exitCode=0 Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.795095 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ad5bd95-e53c-443e-8204-69377ae9600c","Type":"ContainerDied","Data":"db82634890c0444fb4366b978e1d6658530b01df1c672681376cf76fed06ad90"} Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.798862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb2p9" event={"ID":"db58eaee-5842-4d11-babf-1ededef9c68e","Type":"ContainerStarted","Data":"113db6e91221a2ea497bdd4d34555ec57003e510533109059f4e833a141eb1f5"} Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.804380 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=16.759538089 podStartE2EDuration="29.804357716s" podCreationTimestamp="2026-02-26 17:39:47 +0000 UTC" firstStartedPulling="2026-02-26 17:40:03.217344854 +0000 UTC m=+1517.779099193" lastFinishedPulling="2026-02-26 17:40:16.262164481 +0000 UTC m=+1530.823918820" observedRunningTime="2026-02-26 17:40:16.794383194 +0000 UTC m=+1531.356137543" watchObservedRunningTime="2026-02-26 
17:40:16.804357716 +0000 UTC m=+1531.366112055" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.856420 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.177:9292/healthcheck\": dial tcp 10.217.0.177:9292: connect: connection refused" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.856534 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.177:9292/healthcheck\": dial tcp 10.217.0.177:9292: connect: connection refused" Feb 26 17:40:16 crc kubenswrapper[4805]: I0226 17:40:16.970268 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea99294-1de4-49ab-8e64-ae73b59d2b0d" path="/var/lib/kubelet/pods/8ea99294-1de4-49ab-8e64-ae73b59d2b0d/volumes" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.317344 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.334416 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-config-data\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.334486 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klgtj\" (UniqueName: \"kubernetes.io/projected/3ad5bd95-e53c-443e-8204-69377ae9600c-kube-api-access-klgtj\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.334558 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-combined-ca-bundle\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.334610 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-scripts\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.334677 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-httpd-run\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.334701 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-internal-tls-certs\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.341416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ad5bd95-e53c-443e-8204-69377ae9600c-kube-api-access-klgtj" (OuterVolumeSpecName: "kube-api-access-klgtj") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "kube-api-access-klgtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.342674 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.368529 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.368660 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-logs\") pod \"3ad5bd95-e53c-443e-8204-69377ae9600c\" (UID: \"3ad5bd95-e53c-443e-8204-69377ae9600c\") " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.370643 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-logs" (OuterVolumeSpecName: "logs") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.380746 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klgtj\" (UniqueName: \"kubernetes.io/projected/3ad5bd95-e53c-443e-8204-69377ae9600c-kube-api-access-klgtj\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.380782 4805 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.382108 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-scripts" (OuterVolumeSpecName: "scripts") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.427178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.439050 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74" (OuterVolumeSpecName: "glance") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.469295 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-config-data" (OuterVolumeSpecName: "config-data") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.471904 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ad5bd95-e53c-443e-8204-69377ae9600c" (UID: "3ad5bd95-e53c-443e-8204-69377ae9600c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.502764 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.502800 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.502814 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.502825 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ad5bd95-e53c-443e-8204-69377ae9600c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 
crc kubenswrapper[4805]: I0226 17:40:17.502857 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") on node \"crc\" " Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.502871 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ad5bd95-e53c-443e-8204-69377ae9600c-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.593406 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.593588 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74") on node "crc" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.606199 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.836052 4805 generic.go:334] "Generic (PLEG): container finished" podID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerID="e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9" exitCode=2 Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.836331 4805 generic.go:334] "Generic (PLEG): container finished" podID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerID="7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e" exitCode=0 Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.836470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerDied","Data":"e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9"} Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.836568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerDied","Data":"7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e"} Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.855677 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3ad5bd95-e53c-443e-8204-69377ae9600c","Type":"ContainerDied","Data":"31e6ff7463df7efa84faf4562817385d34f3ded147a83729f64830f1927f48ca"} Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.855731 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.855752 4805 scope.go:117] "RemoveContainer" containerID="db82634890c0444fb4366b978e1d6658530b01df1c672681376cf76fed06ad90" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.873541 4805 generic.go:334] "Generic (PLEG): container finished" podID="db58eaee-5842-4d11-babf-1ededef9c68e" containerID="113db6e91221a2ea497bdd4d34555ec57003e510533109059f4e833a141eb1f5" exitCode=0 Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.873962 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb2p9" event={"ID":"db58eaee-5842-4d11-babf-1ededef9c68e","Type":"ContainerDied","Data":"113db6e91221a2ea497bdd4d34555ec57003e510533109059f4e833a141eb1f5"} Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.929849 4805 scope.go:117] "RemoveContainer" containerID="7cb331726954a0f3646c4ed8dbb743dca78dddad2acba37a2ca3d42f2edc5afa" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.942843 4805 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.954128 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.965477 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.965958 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-log" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.965978 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-log" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.965993 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80f8620-b048-40aa-a97c-07cdd033379f" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966006 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80f8620-b048-40aa-a97c-07cdd033379f" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966048 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-httpd" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966057 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-httpd" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966063 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8365e8e6-fa3b-4a87-936c-d63936b861d0" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966068 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8365e8e6-fa3b-4a87-936c-d63936b861d0" 
containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966091 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873a3075-565b-48e7-a3d7-e0bcc6a0a60b" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966098 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="873a3075-565b-48e7-a3d7-e0bcc6a0a60b" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966110 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad35cf63-aace-4b6f-b063-1f3642da07da" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966116 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad35cf63-aace-4b6f-b063-1f3642da07da" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966133 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b33cba2-320e-4c3b-986b-9d7e3225d30e" containerName="oc" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966139 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b33cba2-320e-4c3b-986b-9d7e3225d30e" containerName="oc" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966148 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cec8ed7b-d0af-4d30-b8d0-764518020645" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966154 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec8ed7b-d0af-4d30-b8d0-764518020645" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: E0226 17:40:17.966168 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="435245a7-a717-4cb3-8125-9800dd40f909" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966174 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="435245a7-a717-4cb3-8125-9800dd40f909" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966364 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80f8620-b048-40aa-a97c-07cdd033379f" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966383 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-log" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966395 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" containerName="glance-httpd" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966406 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b33cba2-320e-4c3b-986b-9d7e3225d30e" containerName="oc" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966412 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="873a3075-565b-48e7-a3d7-e0bcc6a0a60b" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966420 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="435245a7-a717-4cb3-8125-9800dd40f909" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966431 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad35cf63-aace-4b6f-b063-1f3642da07da" containerName="mariadb-account-create-update" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966443 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec8ed7b-d0af-4d30-b8d0-764518020645" containerName="mariadb-database-create" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.966451 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8365e8e6-fa3b-4a87-936c-d63936b861d0" containerName="mariadb-database-create" Feb 26 17:40:17 crc 
kubenswrapper[4805]: I0226 17:40:17.967746 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.972347 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.972531 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 26 17:40:17 crc kubenswrapper[4805]: I0226 17:40:17.974781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.119544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.119982 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2e2871a-f575-4f8a-ae77-51f8da2aad53-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.120201 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbpk\" (UniqueName: \"kubernetes.io/projected/f2e2871a-f575-4f8a-ae77-51f8da2aad53-kube-api-access-mbbpk\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.120262 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e2871a-f575-4f8a-ae77-51f8da2aad53-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.120376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.120485 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.120544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.120680 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" 
Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.222760 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.222862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2e2871a-f575-4f8a-ae77-51f8da2aad53-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.222951 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbpk\" (UniqueName: \"kubernetes.io/projected/f2e2871a-f575-4f8a-ae77-51f8da2aad53-kube-api-access-mbbpk\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.222986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e2871a-f575-4f8a-ae77-51f8da2aad53-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.223060 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.223122 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.223159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.223218 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.223610 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2e2871a-f575-4f8a-ae77-51f8da2aad53-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.224593 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2e2871a-f575-4f8a-ae77-51f8da2aad53-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.232861 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.237323 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.237367 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0507a3f442b97fe1466b7dfe12c3b0da8e1c69cf48d5061e83bd97d1212f1f63/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.238373 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.240179 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.241330 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2e2871a-f575-4f8a-ae77-51f8da2aad53-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.252902 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbpk\" (UniqueName: \"kubernetes.io/projected/f2e2871a-f575-4f8a-ae77-51f8da2aad53-kube-api-access-mbbpk\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.434708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c80c6db-f4bf-4b43-ba80-2a1a1e993e74\") pod \"glance-default-internal-api-0\" (UID: \"f2e2871a-f575-4f8a-ae77-51f8da2aad53\") " pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.598534 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:18 crc kubenswrapper[4805]: I0226 17:40:18.949107 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb2p9" event={"ID":"db58eaee-5842-4d11-babf-1ededef9c68e","Type":"ContainerStarted","Data":"7e577679c53bcdc647317c660a2b7232ce5e884dc2b9e0f1438a958a082d6d71"} Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.011490 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ad5bd95-e53c-443e-8204-69377ae9600c" path="/var/lib/kubelet/pods/3ad5bd95-e53c-443e-8204-69377ae9600c/volumes" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.316763 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-82ztd"] Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.318268 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.323481 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.323856 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xgs65" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.324054 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.331622 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-82ztd"] Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.364340 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-scripts\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: 
\"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.364471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.364491 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkrtw\" (UniqueName: \"kubernetes.io/projected/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-kube-api-access-jkrtw\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.364518 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-config-data\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.466563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-scripts\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.466733 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.466762 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkrtw\" (UniqueName: \"kubernetes.io/projected/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-kube-api-access-jkrtw\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.466802 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-config-data\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.474410 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-config-data\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.475328 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-scripts\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.489050 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkrtw\" (UniqueName: 
\"kubernetes.io/projected/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-kube-api-access-jkrtw\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.489810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-82ztd\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") " pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.695964 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-82ztd" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.984827 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 17:40:19 crc kubenswrapper[4805]: I0226 17:40:19.984867 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 26 17:40:20 crc kubenswrapper[4805]: I0226 17:40:20.026681 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 26 17:40:20 crc kubenswrapper[4805]: I0226 17:40:20.052239 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rb2p9" podStartSLOduration=9.225551267 podStartE2EDuration="1m3.052216975s" podCreationTimestamp="2026-02-26 17:39:17 +0000 UTC" firstStartedPulling="2026-02-26 17:39:24.563950057 +0000 UTC m=+1479.125704396" lastFinishedPulling="2026-02-26 17:40:18.390615765 +0000 UTC m=+1532.952370104" observedRunningTime="2026-02-26 17:40:20.042361806 +0000 UTC m=+1534.604116165" watchObservedRunningTime="2026-02-26 17:40:20.052216975 +0000 UTC m=+1534.613971314" Feb 26 
17:40:20 crc kubenswrapper[4805]: I0226 17:40:20.170448 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 17:40:20 crc kubenswrapper[4805]: I0226 17:40:20.171297 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 26 17:40:20 crc kubenswrapper[4805]: I0226 17:40:20.449722 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-82ztd"] Feb 26 17:40:21 crc kubenswrapper[4805]: I0226 17:40:21.129708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2e2871a-f575-4f8a-ae77-51f8da2aad53","Type":"ContainerStarted","Data":"a5f7744a3c178b41e2f1af96a5352437c6c29ca929115af286acd015a062a866"} Feb 26 17:40:21 crc kubenswrapper[4805]: I0226 17:40:21.129928 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2e2871a-f575-4f8a-ae77-51f8da2aad53","Type":"ContainerStarted","Data":"1615d183ec3735938a73c5bfa624d371cd2438762726b9a931f7bae49b434345"} Feb 26 17:40:21 crc kubenswrapper[4805]: I0226 17:40:21.129945 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 17:40:21 crc kubenswrapper[4805]: I0226 17:40:21.129957 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 26 17:40:21 crc kubenswrapper[4805]: I0226 17:40:21.129965 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-82ztd" event={"ID":"328f9e45-178a-4f2d-b4f5-cf870b94e1a2","Type":"ContainerStarted","Data":"d277350d5a243a8a723c8715633e762ca64a94b2c9613906c450e1933b0e2abe"} Feb 26 17:40:21 crc kubenswrapper[4805]: E0226 17:40:21.994153 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549f363_78ec_4961_a646_a27f5d96d274.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:40:22 crc kubenswrapper[4805]: I0226 17:40:22.055899 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2e2871a-f575-4f8a-ae77-51f8da2aad53","Type":"ContainerStarted","Data":"607bc30370e426e790c6234ccad71270dde6e97e38d73634af48f7e2c5f439d5"} Feb 26 17:40:22 crc kubenswrapper[4805]: I0226 17:40:22.087667 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.08764804 podStartE2EDuration="5.08764804s" podCreationTimestamp="2026-02-26 17:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:40:22.077982766 +0000 UTC m=+1536.639737105" watchObservedRunningTime="2026-02-26 17:40:22.08764804 +0000 UTC m=+1536.649402379" Feb 26 17:40:23 crc kubenswrapper[4805]: I0226 17:40:23.192213 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:40:23 crc kubenswrapper[4805]: I0226 17:40:23.192252 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:40:25 crc kubenswrapper[4805]: I0226 17:40:25.222382 4805 generic.go:334] "Generic (PLEG): container finished" podID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerID="329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839" exitCode=0 Feb 26 17:40:25 crc kubenswrapper[4805]: I0226 17:40:25.223568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerDied","Data":"329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839"} Feb 26 17:40:27 crc kubenswrapper[4805]: I0226 17:40:27.595162 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 17:40:27 crc kubenswrapper[4805]: I0226 17:40:27.595742 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:40:27 crc kubenswrapper[4805]: I0226 17:40:27.598070 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.100629 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.100683 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.199247 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.347340 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rb2p9" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.466619 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb2p9"] Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.546684 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8k8mj"] Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.547041 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8k8mj" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="registry-server" containerID="cri-o://1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c" gracePeriod=2 Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.600170 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.601428 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:28 crc kubenswrapper[4805]: I0226 17:40:28.701154 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:29 crc kubenswrapper[4805]: I0226 17:40:29.035460 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:29 crc kubenswrapper[4805]: I0226 17:40:29.289379 4805 generic.go:334] "Generic (PLEG): container finished" podID="366c763a-8d22-4e08-a81e-77464e51ad74" containerID="1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c" exitCode=0 Feb 26 17:40:29 crc kubenswrapper[4805]: I0226 17:40:29.289483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerDied","Data":"1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c"} Feb 26 17:40:29 crc kubenswrapper[4805]: I0226 17:40:29.290593 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:29 crc kubenswrapper[4805]: I0226 17:40:29.290619 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:32 crc kubenswrapper[4805]: E0226 17:40:32.511589 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549f363_78ec_4961_a646_a27f5d96d274.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:40:32 crc kubenswrapper[4805]: I0226 17:40:32.977978 4805 patch_prober.go:28] interesting 
pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:40:32 crc kubenswrapper[4805]: I0226 17:40:32.978039 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:40:34 crc kubenswrapper[4805]: I0226 17:40:34.136633 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:34 crc kubenswrapper[4805]: I0226 17:40:34.136984 4805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 26 17:40:34 crc kubenswrapper[4805]: I0226 17:40:34.361222 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 26 17:40:36 crc kubenswrapper[4805]: E0226 17:40:36.608065 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c is running failed: container process not found" containerID="1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 17:40:36 crc kubenswrapper[4805]: E0226 17:40:36.608853 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c is running failed: container process not found" 
containerID="1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 17:40:36 crc kubenswrapper[4805]: E0226 17:40:36.609306 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c is running failed: container process not found" containerID="1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c" cmd=["grpc_health_probe","-addr=:50051"] Feb 26 17:40:36 crc kubenswrapper[4805]: E0226 17:40:36.609386 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8k8mj" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="registry-server" Feb 26 17:40:37 crc kubenswrapper[4805]: E0226 17:40:37.577156 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Feb 26 17:40:37 crc kubenswrapper[4805]: E0226 17:40:37.577762 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkrtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-82ztd_openstack(328f9e45-178a-4f2d-b4f5-cf870b94e1a2): ErrImagePull: rpc error: code = 
Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 17:40:37 crc kubenswrapper[4805]: E0226 17:40:37.579073 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-82ztd" podUID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.663120 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.719779 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8k8mj" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.720464 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8k8mj" event={"ID":"366c763a-8d22-4e08-a81e-77464e51ad74","Type":"ContainerDied","Data":"ecdfda411d571a2b5140566f8916f7d37ea214036f8a7e9472f94e1e66bc725b"} Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.720529 4805 scope.go:117] "RemoveContainer" containerID="1c8f3d618f51bc29cf3e0ad8f3fbd235e2676ae68aa6eb812e228daeb756ac9c" Feb 26 17:40:37 crc kubenswrapper[4805]: E0226 17:40:37.725197 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-82ztd" podUID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.744270 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-catalog-content\") pod \"366c763a-8d22-4e08-a81e-77464e51ad74\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.744604 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-utilities\") pod \"366c763a-8d22-4e08-a81e-77464e51ad74\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.744635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wnbd\" (UniqueName: \"kubernetes.io/projected/366c763a-8d22-4e08-a81e-77464e51ad74-kube-api-access-5wnbd\") pod \"366c763a-8d22-4e08-a81e-77464e51ad74\" (UID: \"366c763a-8d22-4e08-a81e-77464e51ad74\") " Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.745815 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-utilities" (OuterVolumeSpecName: "utilities") pod "366c763a-8d22-4e08-a81e-77464e51ad74" (UID: "366c763a-8d22-4e08-a81e-77464e51ad74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.757356 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366c763a-8d22-4e08-a81e-77464e51ad74-kube-api-access-5wnbd" (OuterVolumeSpecName: "kube-api-access-5wnbd") pod "366c763a-8d22-4e08-a81e-77464e51ad74" (UID: "366c763a-8d22-4e08-a81e-77464e51ad74"). InnerVolumeSpecName "kube-api-access-5wnbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.758547 4805 scope.go:117] "RemoveContainer" containerID="9208f8cd37127487d5b9a25b492eeb6499de05e250cc8d0409946a3e9c5a8526" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.840782 4805 scope.go:117] "RemoveContainer" containerID="f017b019df6cf6c4e4dd68e20a302751a4e4f67fe6d75a605164b81a1e2ab278" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.848197 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.848227 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wnbd\" (UniqueName: \"kubernetes.io/projected/366c763a-8d22-4e08-a81e-77464e51ad74-kube-api-access-5wnbd\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.938043 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "366c763a-8d22-4e08-a81e-77464e51ad74" (UID: "366c763a-8d22-4e08-a81e-77464e51ad74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:37 crc kubenswrapper[4805]: I0226 17:40:37.950606 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366c763a-8d22-4e08-a81e-77464e51ad74-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:38 crc kubenswrapper[4805]: I0226 17:40:38.074217 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8k8mj"] Feb 26 17:40:38 crc kubenswrapper[4805]: I0226 17:40:38.086483 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8k8mj"] Feb 26 17:40:38 crc kubenswrapper[4805]: I0226 17:40:38.972508 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" path="/var/lib/kubelet/pods/366c763a-8d22-4e08-a81e-77464e51ad74/volumes" Feb 26 17:40:40 crc kubenswrapper[4805]: I0226 17:40:40.410774 4805 scope.go:117] "RemoveContainer" containerID="4f9475e21c47b527dd3f54569e139ace42bcb514cf5521d32eb78e1fd7b024fc" Feb 26 17:40:42 crc kubenswrapper[4805]: E0226 17:40:42.751259 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0549f363_78ec_4961_a646_a27f5d96d274.slice\": RecentStats: unable to find data in memory cache]" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.594892 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658266 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6njv5\" (UniqueName: \"kubernetes.io/projected/1920fe1a-941a-4968-b797-bbdbe80a08de-kube-api-access-6njv5\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658345 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-run-httpd\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-scripts\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658455 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-sg-core-conf-yaml\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658512 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-log-httpd\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658548 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-combined-ca-bundle\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.658703 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-config-data\") pod \"1920fe1a-941a-4968-b797-bbdbe80a08de\" (UID: \"1920fe1a-941a-4968-b797-bbdbe80a08de\") " Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.661258 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.663084 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.667420 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-scripts" (OuterVolumeSpecName: "scripts") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.669308 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1920fe1a-941a-4968-b797-bbdbe80a08de-kube-api-access-6njv5" (OuterVolumeSpecName: "kube-api-access-6njv5") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "kube-api-access-6njv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.696251 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.766076 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6njv5\" (UniqueName: \"kubernetes.io/projected/1920fe1a-941a-4968-b797-bbdbe80a08de-kube-api-access-6njv5\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.766580 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.766609 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.766624 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.766637 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1920fe1a-941a-4968-b797-bbdbe80a08de-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.793600 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.797657 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-config-data" (OuterVolumeSpecName: "config-data") pod "1920fe1a-941a-4968-b797-bbdbe80a08de" (UID: "1920fe1a-941a-4968-b797-bbdbe80a08de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.834085 4805 generic.go:334] "Generic (PLEG): container finished" podID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerID="c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7" exitCode=137 Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.834146 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerDied","Data":"c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7"} Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.834180 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1920fe1a-941a-4968-b797-bbdbe80a08de","Type":"ContainerDied","Data":"c45aec9e07416eae3e4b367d501f045c7ccd21b69c5b255ec5cd9ac9af26c745"} Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.834201 4805 scope.go:117] "RemoveContainer" containerID="c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.834334 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.868291 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.868334 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1920fe1a-941a-4968-b797-bbdbe80a08de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.876614 4805 scope.go:117] "RemoveContainer" containerID="e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.884551 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.899530 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921079 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921762 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-central-agent" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921786 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-central-agent" Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921816 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="registry-server" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921824 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" 
containerName="registry-server" Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921847 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="proxy-httpd" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921855 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="proxy-httpd" Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921869 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="extract-content" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921876 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="extract-content" Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921885 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="extract-utilities" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921892 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="extract-utilities" Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921918 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="sg-core" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921924 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="sg-core" Feb 26 17:40:47 crc kubenswrapper[4805]: E0226 17:40:47.921937 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-notification-agent" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.921943 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" 
containerName="ceilometer-notification-agent" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.922189 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="sg-core" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.922204 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="366c763a-8d22-4e08-a81e-77464e51ad74" containerName="registry-server" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.922220 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="proxy-httpd" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.922242 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-notification-agent" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.922249 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" containerName="ceilometer-central-agent" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.924705 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.930494 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.930687 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.937980 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.953264 4805 scope.go:117] "RemoveContainer" containerID="7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e" Feb 26 17:40:47 crc kubenswrapper[4805]: I0226 17:40:47.970553 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.973474 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.973540 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-config-data\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.973577 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-log-httpd\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.973614 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-run-httpd\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.973725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtll\" (UniqueName: \"kubernetes.io/projected/21b5c747-2fc2-4416-a6cb-b5eb067961ac-kube-api-access-gwtll\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.973775 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-scripts\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:47.998294 4805 scope.go:117] "RemoveContainer" containerID="329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwtll\" (UniqueName: \"kubernetes.io/projected/21b5c747-2fc2-4416-a6cb-b5eb067961ac-kube-api-access-gwtll\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074689 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-scripts\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074793 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074824 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074867 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-config-data\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074899 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-log-httpd\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.074921 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-run-httpd\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc 
kubenswrapper[4805]: I0226 17:40:48.075458 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-run-httpd\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.079475 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-log-httpd\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.089844 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.098812 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.105999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-config-data\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.197585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-scripts\") pod \"ceilometer-0\" (UID: 
\"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.222076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwtll\" (UniqueName: \"kubernetes.io/projected/21b5c747-2fc2-4416-a6cb-b5eb067961ac-kube-api-access-gwtll\") pod \"ceilometer-0\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.270787 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.379715 4805 scope.go:117] "RemoveContainer" containerID="c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7" Feb 26 17:40:48 crc kubenswrapper[4805]: E0226 17:40:48.380202 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7\": container with ID starting with c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7 not found: ID does not exist" containerID="c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.380290 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7"} err="failed to get container status \"c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7\": rpc error: code = NotFound desc = could not find container \"c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7\": container with ID starting with c14a1121d83eec4348d757a61121920b84c8900d7e9638fc9586e928ae43f6d7 not found: ID does not exist" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.380314 4805 scope.go:117] "RemoveContainer" 
containerID="e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9" Feb 26 17:40:48 crc kubenswrapper[4805]: E0226 17:40:48.380604 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9\": container with ID starting with e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9 not found: ID does not exist" containerID="e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.380640 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9"} err="failed to get container status \"e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9\": rpc error: code = NotFound desc = could not find container \"e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9\": container with ID starting with e24990ab9eca2de91c5a9148c5e94645110eb14d7478a87dc8631868d3915cb9 not found: ID does not exist" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.380674 4805 scope.go:117] "RemoveContainer" containerID="7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e" Feb 26 17:40:48 crc kubenswrapper[4805]: E0226 17:40:48.380941 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e\": container with ID starting with 7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e not found: ID does not exist" containerID="7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.380973 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e"} err="failed to get container status \"7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e\": rpc error: code = NotFound desc = could not find container \"7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e\": container with ID starting with 7c557c48dc4d838f2dd8d079505049fe8e9b417524e92681d3fa949b84ffc02e not found: ID does not exist" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.380995 4805 scope.go:117] "RemoveContainer" containerID="329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839" Feb 26 17:40:48 crc kubenswrapper[4805]: E0226 17:40:48.381216 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839\": container with ID starting with 329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839 not found: ID does not exist" containerID="329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839" Feb 26 17:40:48 crc kubenswrapper[4805]: I0226 17:40:48.381238 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839"} err="failed to get container status \"329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839\": rpc error: code = NotFound desc = could not find container \"329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839\": container with ID starting with 329345891e99836c225e1a4614bb4d1a4bb31c797cb36ff68b3f3fc76f419839 not found: ID does not exist" Feb 26 17:40:49 crc kubenswrapper[4805]: I0226 17:40:49.005540 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1920fe1a-941a-4968-b797-bbdbe80a08de" path="/var/lib/kubelet/pods/1920fe1a-941a-4968-b797-bbdbe80a08de/volumes" Feb 26 17:40:49 crc kubenswrapper[4805]: I0226 
17:40:49.006977 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:40:49 crc kubenswrapper[4805]: W0226 17:40:49.020780 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21b5c747_2fc2_4416_a6cb_b5eb067961ac.slice/crio-0085b62d06bf0520a2d5adcaaed538a882df8e1aee8040704b235bcb09a99a33 WatchSource:0}: Error finding container 0085b62d06bf0520a2d5adcaaed538a882df8e1aee8040704b235bcb09a99a33: Status 404 returned error can't find the container with id 0085b62d06bf0520a2d5adcaaed538a882df8e1aee8040704b235bcb09a99a33 Feb 26 17:40:49 crc kubenswrapper[4805]: I0226 17:40:49.962593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerStarted","Data":"f1acd28960072b22e9f55169b9158f874e79e369c402406fb9a46cdf5d0a176c"} Feb 26 17:40:49 crc kubenswrapper[4805]: I0226 17:40:49.962956 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerStarted","Data":"0085b62d06bf0520a2d5adcaaed538a882df8e1aee8040704b235bcb09a99a33"} Feb 26 17:40:50 crc kubenswrapper[4805]: I0226 17:40:50.971872 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerStarted","Data":"756ade5208c4f97999cee04da130327fe400ecafb0837e387eb1c54efe7d8070"} Feb 26 17:40:51 crc kubenswrapper[4805]: I0226 17:40:51.984253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerStarted","Data":"991ff0240934b77951a77103fed542d23b2db853834592adbb0aa587221bc9fd"} Feb 26 17:40:55 crc kubenswrapper[4805]: I0226 17:40:55.024672 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerStarted","Data":"4ac5d4aeb6dcb3ee180b54d30a12c74eca84c9f3096cb4bb27df7f07f9ad6782"} Feb 26 17:40:55 crc kubenswrapper[4805]: I0226 17:40:55.025347 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:40:55 crc kubenswrapper[4805]: I0226 17:40:55.028119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-82ztd" event={"ID":"328f9e45-178a-4f2d-b4f5-cf870b94e1a2","Type":"ContainerStarted","Data":"0e34f29d8dadc2970f60d8318c3a9e879c93c91e421544a2764c267db66e03bd"} Feb 26 17:40:55 crc kubenswrapper[4805]: I0226 17:40:55.096754 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.319041703 podStartE2EDuration="8.096728688s" podCreationTimestamp="2026-02-26 17:40:47 +0000 UTC" firstStartedPulling="2026-02-26 17:40:49.03096336 +0000 UTC m=+1563.592717699" lastFinishedPulling="2026-02-26 17:40:53.808650345 +0000 UTC m=+1568.370404684" observedRunningTime="2026-02-26 17:40:55.046126591 +0000 UTC m=+1569.607880940" watchObservedRunningTime="2026-02-26 17:40:55.096728688 +0000 UTC m=+1569.658483027" Feb 26 17:40:55 crc kubenswrapper[4805]: I0226 17:40:55.103489 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-82ztd" podStartSLOduration=2.081532745 podStartE2EDuration="36.103472688s" podCreationTimestamp="2026-02-26 17:40:19 +0000 UTC" firstStartedPulling="2026-02-26 17:40:20.478497074 +0000 UTC m=+1535.040251413" lastFinishedPulling="2026-02-26 17:40:54.500437017 +0000 UTC m=+1569.062191356" observedRunningTime="2026-02-26 17:40:55.079451572 +0000 UTC m=+1569.641205931" watchObservedRunningTime="2026-02-26 17:40:55.103472688 +0000 UTC m=+1569.665227027" Feb 26 17:40:56 crc kubenswrapper[4805]: I0226 17:40:56.542415 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 26 17:40:57 crc kubenswrapper[4805]: I0226 17:40:57.059226 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="sg-core" containerID="cri-o://991ff0240934b77951a77103fed542d23b2db853834592adbb0aa587221bc9fd" gracePeriod=30 Feb 26 17:40:57 crc kubenswrapper[4805]: I0226 17:40:57.059231 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="proxy-httpd" containerID="cri-o://4ac5d4aeb6dcb3ee180b54d30a12c74eca84c9f3096cb4bb27df7f07f9ad6782" gracePeriod=30 Feb 26 17:40:57 crc kubenswrapper[4805]: I0226 17:40:57.059263 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-central-agent" containerID="cri-o://f1acd28960072b22e9f55169b9158f874e79e369c402406fb9a46cdf5d0a176c" gracePeriod=30 Feb 26 17:40:57 crc kubenswrapper[4805]: I0226 17:40:57.059379 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-notification-agent" containerID="cri-o://756ade5208c4f97999cee04da130327fe400ecafb0837e387eb1c54efe7d8070" gracePeriod=30 Feb 26 17:40:58 crc kubenswrapper[4805]: I0226 17:40:58.071807 4805 generic.go:334] "Generic (PLEG): container finished" podID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerID="4ac5d4aeb6dcb3ee180b54d30a12c74eca84c9f3096cb4bb27df7f07f9ad6782" exitCode=0 Feb 26 17:40:58 crc kubenswrapper[4805]: I0226 17:40:58.072428 4805 generic.go:334] "Generic (PLEG): container finished" podID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerID="991ff0240934b77951a77103fed542d23b2db853834592adbb0aa587221bc9fd" exitCode=2 Feb 26 17:40:58 crc kubenswrapper[4805]: I0226 
17:40:58.071877 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerDied","Data":"4ac5d4aeb6dcb3ee180b54d30a12c74eca84c9f3096cb4bb27df7f07f9ad6782"} Feb 26 17:40:58 crc kubenswrapper[4805]: I0226 17:40:58.072489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerDied","Data":"991ff0240934b77951a77103fed542d23b2db853834592adbb0aa587221bc9fd"} Feb 26 17:40:58 crc kubenswrapper[4805]: I0226 17:40:58.072505 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerDied","Data":"756ade5208c4f97999cee04da130327fe400ecafb0837e387eb1c54efe7d8070"} Feb 26 17:40:58 crc kubenswrapper[4805]: I0226 17:40:58.072446 4805 generic.go:334] "Generic (PLEG): container finished" podID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerID="756ade5208c4f97999cee04da130327fe400ecafb0837e387eb1c54efe7d8070" exitCode=0 Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.113926 4805 generic.go:334] "Generic (PLEG): container finished" podID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerID="f1acd28960072b22e9f55169b9158f874e79e369c402406fb9a46cdf5d0a176c" exitCode=0 Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.114040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerDied","Data":"f1acd28960072b22e9f55169b9158f874e79e369c402406fb9a46cdf5d0a176c"} Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.272274 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421471 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-scripts\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-config-data\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421678 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-sg-core-conf-yaml\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421727 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-run-httpd\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421768 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-log-httpd\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421798 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-combined-ca-bundle\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.421921 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwtll\" (UniqueName: \"kubernetes.io/projected/21b5c747-2fc2-4416-a6cb-b5eb067961ac-kube-api-access-gwtll\") pod \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\" (UID: \"21b5c747-2fc2-4416-a6cb-b5eb067961ac\") " Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.422275 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.422792 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.423257 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.423287 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21b5c747-2fc2-4416-a6cb-b5eb067961ac-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.427835 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-scripts" (OuterVolumeSpecName: "scripts") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.430416 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b5c747-2fc2-4416-a6cb-b5eb067961ac-kube-api-access-gwtll" (OuterVolumeSpecName: "kube-api-access-gwtll") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "kube-api-access-gwtll". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.452994 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.517246 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.525958 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwtll\" (UniqueName: \"kubernetes.io/projected/21b5c747-2fc2-4416-a6cb-b5eb067961ac-kube-api-access-gwtll\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.525991 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.526007 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.526060 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.550281 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-config-data" (OuterVolumeSpecName: "config-data") pod "21b5c747-2fc2-4416-a6cb-b5eb067961ac" (UID: "21b5c747-2fc2-4416-a6cb-b5eb067961ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.627501 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21b5c747-2fc2-4416-a6cb-b5eb067961ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.978135 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.978204 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.978261 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.979158 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fda738fe0407aa3e4e71cd0054243c0ef019a44dbbf48701bf838c7b50aeb1e6"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:41:02 crc kubenswrapper[4805]: I0226 17:41:02.979227 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" 
containerID="cri-o://fda738fe0407aa3e4e71cd0054243c0ef019a44dbbf48701bf838c7b50aeb1e6" gracePeriod=600 Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.130099 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21b5c747-2fc2-4416-a6cb-b5eb067961ac","Type":"ContainerDied","Data":"0085b62d06bf0520a2d5adcaaed538a882df8e1aee8040704b235bcb09a99a33"} Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.130163 4805 scope.go:117] "RemoveContainer" containerID="4ac5d4aeb6dcb3ee180b54d30a12c74eca84c9f3096cb4bb27df7f07f9ad6782" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.130404 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.133161 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="fda738fe0407aa3e4e71cd0054243c0ef019a44dbbf48701bf838c7b50aeb1e6" exitCode=0 Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.133212 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"fda738fe0407aa3e4e71cd0054243c0ef019a44dbbf48701bf838c7b50aeb1e6"} Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.222867 4805 scope.go:117] "RemoveContainer" containerID="991ff0240934b77951a77103fed542d23b2db853834592adbb0aa587221bc9fd" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.258202 4805 scope.go:117] "RemoveContainer" containerID="756ade5208c4f97999cee04da130327fe400ecafb0837e387eb1c54efe7d8070" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.258474 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.274115 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] 
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.287364 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:03 crc kubenswrapper[4805]: E0226 17:41:03.293439 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-notification-agent" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.293549 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-notification-agent" Feb 26 17:41:03 crc kubenswrapper[4805]: E0226 17:41:03.293596 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="sg-core" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.293605 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="sg-core" Feb 26 17:41:03 crc kubenswrapper[4805]: E0226 17:41:03.293655 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="proxy-httpd" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.293667 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="proxy-httpd" Feb 26 17:41:03 crc kubenswrapper[4805]: E0226 17:41:03.293712 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-central-agent" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.293720 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-central-agent" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.300103 4805 scope.go:117] "RemoveContainer" containerID="f1acd28960072b22e9f55169b9158f874e79e369c402406fb9a46cdf5d0a176c" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.304185 
4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-notification-agent" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.304278 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="proxy-httpd" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.304293 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="sg-core" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.304343 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" containerName="ceilometer-central-agent" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.335872 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.338904 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.348857 4805 scope.go:117] "RemoveContainer" containerID="790d5b8d614e0ff22c848f43674f8b7d4c300d976397a943518fa87467a5a9a3" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.352543 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.353371 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.445906 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-config-data\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.445969 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-log-httpd\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.446008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0" Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.446050 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.446071 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-run-httpd\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.446092 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-scripts\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.446145 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7sdk\" (UniqueName: \"kubernetes.io/projected/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-kube-api-access-f7sdk\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548071 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548126 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548150 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-run-httpd\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548175 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-scripts\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548216 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7sdk\" (UniqueName: \"kubernetes.io/projected/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-kube-api-access-f7sdk\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548345 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-config-data\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-log-httpd\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.548817 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-log-httpd\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.549067 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-run-httpd\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.553889 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.554605 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.555605 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-scripts\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.566572 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-config-data\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.569140 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7sdk\" (UniqueName: \"kubernetes.io/projected/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-kube-api-access-f7sdk\") pod \"ceilometer-0\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " pod="openstack/ceilometer-0"
Feb 26 17:41:03 crc kubenswrapper[4805]: I0226 17:41:03.665591 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 26 17:41:05 crc kubenswrapper[4805]: I0226 17:41:04.154416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732"}
Feb 26 17:41:05 crc kubenswrapper[4805]: I0226 17:41:04.967926 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b5c747-2fc2-4416-a6cb-b5eb067961ac" path="/var/lib/kubelet/pods/21b5c747-2fc2-4416-a6cb-b5eb067961ac/volumes"
Feb 26 17:41:05 crc kubenswrapper[4805]: I0226 17:41:05.747280 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 26 17:41:05 crc kubenswrapper[4805]: W0226 17:41:05.759242 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda37a8b33_12b3_47a7_92a3_6e4d8fa8338f.slice/crio-524a0a74ff5c76e199fe950b4f94805df31ef3548db8904a17d4ac819df2d501 WatchSource:0}: Error finding container 524a0a74ff5c76e199fe950b4f94805df31ef3548db8904a17d4ac819df2d501: Status 404 returned error can't find the container with id 524a0a74ff5c76e199fe950b4f94805df31ef3548db8904a17d4ac819df2d501
Feb 26 17:41:06 crc kubenswrapper[4805]: I0226 17:41:06.179700 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerStarted","Data":"524a0a74ff5c76e199fe950b4f94805df31ef3548db8904a17d4ac819df2d501"}
Feb 26 17:41:07 crc kubenswrapper[4805]: I0226 17:41:07.192698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerStarted","Data":"cf27a45c5291442af74cc380bc8d15222294677f2bddb8b31454e449079be748"}
Feb 26 17:41:08 crc kubenswrapper[4805]: I0226 17:41:08.208995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerStarted","Data":"ecfad850bcea8f2884f126683e405241530f8ba87be0fa28df4214b1003920aa"}
Feb 26 17:41:08 crc kubenswrapper[4805]: I0226 17:41:08.209850 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerStarted","Data":"4114c257ed26074cd4d06494800c25635d1afa20c52fb1e41b1c835f739f445a"}
Feb 26 17:41:10 crc kubenswrapper[4805]: I0226 17:41:10.233921 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerStarted","Data":"d547e4b8ba276a830f861d26a559ed481cbbd31aa0f158046bb62112fa2898f8"}
Feb 26 17:41:10 crc kubenswrapper[4805]: I0226 17:41:10.234465 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 26 17:41:10 crc kubenswrapper[4805]: I0226 17:41:10.269097 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.158731405 podStartE2EDuration="7.269070355s" podCreationTimestamp="2026-02-26 17:41:03 +0000 UTC" firstStartedPulling="2026-02-26 17:41:05.761844107 +0000 UTC m=+1580.323598456" lastFinishedPulling="2026-02-26 17:41:09.872183067 +0000 UTC m=+1584.433937406" observedRunningTime="2026-02-26 17:41:10.25854319 +0000 UTC m=+1584.820297529" watchObservedRunningTime="2026-02-26 17:41:10.269070355 +0000 UTC m=+1584.830824694"
Feb 26 17:41:11 crc kubenswrapper[4805]: I0226 17:41:11.247319 4805 generic.go:334] "Generic (PLEG): container finished" podID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" containerID="0e34f29d8dadc2970f60d8318c3a9e879c93c91e421544a2764c267db66e03bd" exitCode=0
Feb 26 17:41:11 crc kubenswrapper[4805]: I0226 17:41:11.247458 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-82ztd" event={"ID":"328f9e45-178a-4f2d-b4f5-cf870b94e1a2","Type":"ContainerDied","Data":"0e34f29d8dadc2970f60d8318c3a9e879c93c91e421544a2764c267db66e03bd"}
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.731711 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-82ztd"
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.865731 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-config-data\") pod \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") "
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.865810 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkrtw\" (UniqueName: \"kubernetes.io/projected/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-kube-api-access-jkrtw\") pod \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") "
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.865894 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-scripts\") pod \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") "
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.866001 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-combined-ca-bundle\") pod \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\" (UID: \"328f9e45-178a-4f2d-b4f5-cf870b94e1a2\") "
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.871309 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-kube-api-access-jkrtw" (OuterVolumeSpecName: "kube-api-access-jkrtw") pod "328f9e45-178a-4f2d-b4f5-cf870b94e1a2" (UID: "328f9e45-178a-4f2d-b4f5-cf870b94e1a2"). InnerVolumeSpecName "kube-api-access-jkrtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.878482 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-scripts" (OuterVolumeSpecName: "scripts") pod "328f9e45-178a-4f2d-b4f5-cf870b94e1a2" (UID: "328f9e45-178a-4f2d-b4f5-cf870b94e1a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.895311 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "328f9e45-178a-4f2d-b4f5-cf870b94e1a2" (UID: "328f9e45-178a-4f2d-b4f5-cf870b94e1a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.914238 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-config-data" (OuterVolumeSpecName: "config-data") pod "328f9e45-178a-4f2d-b4f5-cf870b94e1a2" (UID: "328f9e45-178a-4f2d-b4f5-cf870b94e1a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.968631 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.968662 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkrtw\" (UniqueName: \"kubernetes.io/projected/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-kube-api-access-jkrtw\") on node \"crc\" DevicePath \"\""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.968672 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-scripts\") on node \"crc\" DevicePath \"\""
Feb 26 17:41:12 crc kubenswrapper[4805]: I0226 17:41:12.968681 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/328f9e45-178a-4f2d-b4f5-cf870b94e1a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.289725 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-82ztd" event={"ID":"328f9e45-178a-4f2d-b4f5-cf870b94e1a2","Type":"ContainerDied","Data":"d277350d5a243a8a723c8715633e762ca64a94b2c9613906c450e1933b0e2abe"}
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.289798 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d277350d5a243a8a723c8715633e762ca64a94b2c9613906c450e1933b0e2abe"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.290038 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-82ztd"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.489523 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 26 17:41:13 crc kubenswrapper[4805]: E0226 17:41:13.494941 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" containerName="nova-cell0-conductor-db-sync"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.494977 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" containerName="nova-cell0-conductor-db-sync"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.495437 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" containerName="nova-cell0-conductor-db-sync"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.496448 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.499929 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xgs65"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.500213 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.542287 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.581094 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8k5\" (UniqueName: \"kubernetes.io/projected/59691810-7196-4633-8fc6-b46a505b653d-kube-api-access-wt8k5\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.581238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59691810-7196-4633-8fc6-b46a505b653d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.581294 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59691810-7196-4633-8fc6-b46a505b653d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.685880 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59691810-7196-4633-8fc6-b46a505b653d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.686079 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59691810-7196-4633-8fc6-b46a505b653d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.686416 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8k5\" (UniqueName: \"kubernetes.io/projected/59691810-7196-4633-8fc6-b46a505b653d-kube-api-access-wt8k5\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.698903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59691810-7196-4633-8fc6-b46a505b653d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.705924 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59691810-7196-4633-8fc6-b46a505b653d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.706379 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt8k5\" (UniqueName: \"kubernetes.io/projected/59691810-7196-4633-8fc6-b46a505b653d-kube-api-access-wt8k5\") pod \"nova-cell0-conductor-0\" (UID: \"59691810-7196-4633-8fc6-b46a505b653d\") " pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:13 crc kubenswrapper[4805]: I0226 17:41:13.820484 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:14 crc kubenswrapper[4805]: I0226 17:41:14.292987 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 26 17:41:14 crc kubenswrapper[4805]: W0226 17:41:14.297598 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59691810_7196_4633_8fc6_b46a505b653d.slice/crio-1586e9d2f69869fb83d8919d677ae2a4ab2d0b98122ee84e23567b836941e917 WatchSource:0}: Error finding container 1586e9d2f69869fb83d8919d677ae2a4ab2d0b98122ee84e23567b836941e917: Status 404 returned error can't find the container with id 1586e9d2f69869fb83d8919d677ae2a4ab2d0b98122ee84e23567b836941e917
Feb 26 17:41:15 crc kubenswrapper[4805]: I0226 17:41:15.309100 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"59691810-7196-4633-8fc6-b46a505b653d","Type":"ContainerStarted","Data":"5e54992b37b1fdbe124ae2b192710cb27d1b99a27c7500bbf9e797d3b71c31d9"}
Feb 26 17:41:15 crc kubenswrapper[4805]: I0226 17:41:15.309156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"59691810-7196-4633-8fc6-b46a505b653d","Type":"ContainerStarted","Data":"1586e9d2f69869fb83d8919d677ae2a4ab2d0b98122ee84e23567b836941e917"}
Feb 26 17:41:15 crc kubenswrapper[4805]: I0226 17:41:15.310313 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:15 crc kubenswrapper[4805]: I0226 17:41:15.339402 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.339375555 podStartE2EDuration="2.339375555s" podCreationTimestamp="2026-02-26 17:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:15.326697865 +0000 UTC m=+1589.888452214" watchObservedRunningTime="2026-02-26 17:41:15.339375555 +0000 UTC m=+1589.901129914"
Feb 26 17:41:23 crc kubenswrapper[4805]: I0226 17:41:23.850094 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.302276 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cs754"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.304377 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.307226 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.307308 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.319209 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cs754"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.417489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghw5t\" (UniqueName: \"kubernetes.io/projected/f590f397-4abc-4f7a-a9f9-013f581e3ec6-kube-api-access-ghw5t\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.417619 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-config-data\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.417706 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-scripts\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.417750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.448862 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.450644 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.455566 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.467151 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.521727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.521837 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-config-data\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.521889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-config-data\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.522034 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-scripts\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.522079 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdmql\" (UniqueName: \"kubernetes.io/projected/bea31308-564c-4030-b718-41b7fb684418-kube-api-access-jdmql\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.522139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.522327 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghw5t\" (UniqueName: \"kubernetes.io/projected/f590f397-4abc-4f7a-a9f9-013f581e3ec6-kube-api-access-ghw5t\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.568553 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.568661 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-config-data\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.581580 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghw5t\" (UniqueName: \"kubernetes.io/projected/f590f397-4abc-4f7a-a9f9-013f581e3ec6-kube-api-access-ghw5t\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.591749 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-scripts\") pod \"nova-cell0-cell-mapping-cs754\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.621076 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.622903 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.624792 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-config-data\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.624852 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdmql\" (UniqueName: \"kubernetes.io/projected/bea31308-564c-4030-b718-41b7fb684418-kube-api-access-jdmql\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.624963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.630073 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.630714 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cs754"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.643744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-config-data\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.651686 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.683258 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.685486 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.705671 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.713269 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.721974 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdmql\" (UniqueName: \"kubernetes.io/projected/bea31308-564c-4030-b718-41b7fb684418-kube-api-access-jdmql\") pod \"nova-scheduler-0\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734218 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f85b857-4074-4d69-9e39-91261fbe93fb-logs\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734282 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqv2c\" (UniqueName: \"kubernetes.io/projected/c6496761-f8be-4176-a000-0293499b739e-kube-api-access-mqv2c\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734322 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734453 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-config-data\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734477 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6496761-f8be-4176-a000-0293499b739e-logs\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdrvs\" (UniqueName: \"kubernetes.io/projected/0f85b857-4074-4d69-9e39-91261fbe93fb-kube-api-access-rdrvs\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.734567 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-config-data\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.758090 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.811417 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836203 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836277 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836320 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-config-data\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836341 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6496761-f8be-4176-a000-0293499b739e-logs\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836372 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdrvs\" (UniqueName: \"kubernetes.io/projected/0f85b857-4074-4d69-9e39-91261fbe93fb-kube-api-access-rdrvs\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836398 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-config-data\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836459 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f85b857-4074-4d69-9e39-91261fbe93fb-logs\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.836486 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqv2c\" (UniqueName: \"kubernetes.io/projected/c6496761-f8be-4176-a000-0293499b739e-kube-api-access-mqv2c\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.840469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6496761-f8be-4176-a000-0293499b739e-logs\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.850881 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-wwk5t"]
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.852544 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f85b857-4074-4d69-9e39-91261fbe93fb-logs\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0"
Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.852696 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.863576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.877216 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-config-data\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.896782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.897728 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqv2c\" (UniqueName: \"kubernetes.io/projected/c6496761-f8be-4176-a000-0293499b739e-kube-api-access-mqv2c\") pod \"nova-api-0\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " pod="openstack/nova-api-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.897927 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-config-data\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.901569 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdrvs\" (UniqueName: 
\"kubernetes.io/projected/0f85b857-4074-4d69-9e39-91261fbe93fb-kube-api-access-rdrvs\") pod \"nova-metadata-0\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " pod="openstack/nova-metadata-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.913176 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-wwk5t"] Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.929112 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.943334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.943533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.943573 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.943665 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-config\") pod 
\"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.943699 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.943750 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxs2g\" (UniqueName: \"kubernetes.io/projected/d867cdff-5a57-451e-bb64-6b28255e4ae6-kube-api-access-kxs2g\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:24 crc kubenswrapper[4805]: I0226 17:41:24.948006 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.040490 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.049747 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-svc\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.049808 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.049907 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-config\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.049933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.049973 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxs2g\" (UniqueName: 
\"kubernetes.io/projected/d867cdff-5a57-451e-bb64-6b28255e4ae6-kube-api-access-kxs2g\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.050054 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.051133 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-nb\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.051777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-swift-storage-0\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.051925 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-config\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.052644 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-svc\") pod 
\"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.052982 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-sb\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.062246 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.062371 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.087491 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.111093 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxs2g\" (UniqueName: \"kubernetes.io/projected/d867cdff-5a57-451e-bb64-6b28255e4ae6-kube-api-access-kxs2g\") pod \"dnsmasq-dns-884c8b8f5-wwk5t\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.264217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.264265 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.264293 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhmvv\" (UniqueName: \"kubernetes.io/projected/98441635-6cfa-4207-957c-6161496d848c-kube-api-access-fhmvv\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.314511 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.375163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.375208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.375233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhmvv\" (UniqueName: \"kubernetes.io/projected/98441635-6cfa-4207-957c-6161496d848c-kube-api-access-fhmvv\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: 
I0226 17:41:25.383931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.389512 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.425633 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhmvv\" (UniqueName: \"kubernetes.io/projected/98441635-6cfa-4207-957c-6161496d848c-kube-api-access-fhmvv\") pod \"nova-cell1-novncproxy-0\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.505121 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cs754"] Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.711810 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.762666 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.894500 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:25 crc kubenswrapper[4805]: W0226 17:41:25.895046 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f85b857_4074_4d69_9e39_91261fbe93fb.slice/crio-d79ccbd876a908d4c84f511f4c4a5701914547b5fd212f33cd3e4acca9271ada WatchSource:0}: Error finding container d79ccbd876a908d4c84f511f4c4a5701914547b5fd212f33cd3e4acca9271ada: Status 404 returned error can't find the container with id d79ccbd876a908d4c84f511f4c4a5701914547b5fd212f33cd3e4acca9271ada Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.926541 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.967251 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfpdp"] Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.969055 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.973284 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.977449 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 26 17:41:25 crc kubenswrapper[4805]: I0226 17:41:25.999561 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfpdp"] Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.102141 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnqb5\" (UniqueName: \"kubernetes.io/projected/59254db9-616c-48c0-bad7-c55d30e99749-kube-api-access-gnqb5\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.103593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-scripts\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.103908 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.104195 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-config-data\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: W0226 17:41:26.121801 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd867cdff_5a57_451e_bb64_6b28255e4ae6.slice/crio-0e8093b8880fff5c912ba1b3957ff11fbcdbc178b27e878bdcdd3904b4f74c42 WatchSource:0}: Error finding container 0e8093b8880fff5c912ba1b3957ff11fbcdbc178b27e878bdcdd3904b4f74c42: Status 404 returned error can't find the container with id 0e8093b8880fff5c912ba1b3957ff11fbcdbc178b27e878bdcdd3904b4f74c42 Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.122470 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-wwk5t"] Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.205798 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-config-data\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.206139 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnqb5\" (UniqueName: \"kubernetes.io/projected/59254db9-616c-48c0-bad7-c55d30e99749-kube-api-access-gnqb5\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.206388 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-scripts\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.206629 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.214003 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-scripts\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.214566 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-config-data\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.217774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.226200 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnqb5\" (UniqueName: 
\"kubernetes.io/projected/59254db9-616c-48c0-bad7-c55d30e99749-kube-api-access-gnqb5\") pod \"nova-cell1-conductor-db-sync-bfpdp\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.291355 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.300611 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:41:26 crc kubenswrapper[4805]: W0226 17:41:26.302280 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98441635_6cfa_4207_957c_6161496d848c.slice/crio-e3f2e8475d53ab12f687756cccd0756a8daee7e6cfd16eee88e16f45e022e1a5 WatchSource:0}: Error finding container e3f2e8475d53ab12f687756cccd0756a8daee7e6cfd16eee88e16f45e022e1a5: Status 404 returned error can't find the container with id e3f2e8475d53ab12f687756cccd0756a8daee7e6cfd16eee88e16f45e022e1a5 Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.481875 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6496761-f8be-4176-a000-0293499b739e","Type":"ContainerStarted","Data":"c342933d0f422c141ea086370fbc3d58b5941a61c9c8afd36798c48185940856"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.485839 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" event={"ID":"d867cdff-5a57-451e-bb64-6b28255e4ae6","Type":"ContainerStarted","Data":"6c493379b4cd5b0fd727761c5e0a6ca6cfe54c63a0f32a3b9aa563682302b314"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.485892 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" 
event={"ID":"d867cdff-5a57-451e-bb64-6b28255e4ae6","Type":"ContainerStarted","Data":"0e8093b8880fff5c912ba1b3957ff11fbcdbc178b27e878bdcdd3904b4f74c42"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.499252 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"98441635-6cfa-4207-957c-6161496d848c","Type":"ContainerStarted","Data":"e3f2e8475d53ab12f687756cccd0756a8daee7e6cfd16eee88e16f45e022e1a5"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.500758 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f85b857-4074-4d69-9e39-91261fbe93fb","Type":"ContainerStarted","Data":"d79ccbd876a908d4c84f511f4c4a5701914547b5fd212f33cd3e4acca9271ada"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.519581 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cs754" event={"ID":"f590f397-4abc-4f7a-a9f9-013f581e3ec6","Type":"ContainerStarted","Data":"a3416ad08c841f32896caf390933d5af6f20f84578821770aff259b157fb0d60"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.520068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cs754" event={"ID":"f590f397-4abc-4f7a-a9f9-013f581e3ec6","Type":"ContainerStarted","Data":"9a96a82e690b30306510aa828202a0fcc975e4bada0e7c262c97c89d0f36574d"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.544357 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bea31308-564c-4030-b718-41b7fb684418","Type":"ContainerStarted","Data":"32a16e243a6d4ecc5b6918a01323c102670ff8f0f75d8b1d6d55921057946ab4"} Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.562594 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cs754" podStartSLOduration=2.562572802 podStartE2EDuration="2.562572802s" podCreationTimestamp="2026-02-26 17:41:24 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:26.562353116 +0000 UTC m=+1601.124107465" watchObservedRunningTime="2026-02-26 17:41:26.562572802 +0000 UTC m=+1601.124327141" Feb 26 17:41:26 crc kubenswrapper[4805]: I0226 17:41:26.921558 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfpdp"] Feb 26 17:41:27 crc kubenswrapper[4805]: I0226 17:41:27.592901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" event={"ID":"59254db9-616c-48c0-bad7-c55d30e99749","Type":"ContainerStarted","Data":"ea36b2b3a113a3b525158efbbaa019cdbbe15f4b9c7b369d58bde6e7a288fd7f"} Feb 26 17:41:27 crc kubenswrapper[4805]: I0226 17:41:27.593896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" event={"ID":"59254db9-616c-48c0-bad7-c55d30e99749","Type":"ContainerStarted","Data":"4975e6698e0786008128aa47a1051ed4a3b4f6493fd34d56c490f3a78aeb9b0d"} Feb 26 17:41:27 crc kubenswrapper[4805]: I0226 17:41:27.606884 4805 generic.go:334] "Generic (PLEG): container finished" podID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerID="6c493379b4cd5b0fd727761c5e0a6ca6cfe54c63a0f32a3b9aa563682302b314" exitCode=0 Feb 26 17:41:27 crc kubenswrapper[4805]: I0226 17:41:27.608207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" event={"ID":"d867cdff-5a57-451e-bb64-6b28255e4ae6","Type":"ContainerDied","Data":"6c493379b4cd5b0fd727761c5e0a6ca6cfe54c63a0f32a3b9aa563682302b314"} Feb 26 17:41:27 crc kubenswrapper[4805]: I0226 17:41:27.618141 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" podStartSLOduration=2.618118875 podStartE2EDuration="2.618118875s" podCreationTimestamp="2026-02-26 17:41:25 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:27.611545579 +0000 UTC m=+1602.173299918" watchObservedRunningTime="2026-02-26 17:41:27.618118875 +0000 UTC m=+1602.179873214" Feb 26 17:41:28 crc kubenswrapper[4805]: I0226 17:41:28.170069 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:28 crc kubenswrapper[4805]: I0226 17:41:28.253438 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.656518 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" event={"ID":"d867cdff-5a57-451e-bb64-6b28255e4ae6","Type":"ContainerStarted","Data":"e0fc914adb0fbf4f1d46ff6d613031ea90174eebf301c3431acc650c3007ab3a"} Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.657187 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.661189 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="98441635-6cfa-4207-957c-6161496d848c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d" gracePeriod=30 Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.661366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"98441635-6cfa-4207-957c-6161496d848c","Type":"ContainerStarted","Data":"3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d"} Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.663597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0f85b857-4074-4d69-9e39-91261fbe93fb","Type":"ContainerStarted","Data":"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438"} Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.665169 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bea31308-564c-4030-b718-41b7fb684418","Type":"ContainerStarted","Data":"f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05"} Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.666594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6496761-f8be-4176-a000-0293499b739e","Type":"ContainerStarted","Data":"567172892bede0e1db5f88587994e74e56faff7e6330f45fb97264b33dc5ed77"} Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.694572 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" podStartSLOduration=6.694555618 podStartE2EDuration="6.694555618s" podCreationTimestamp="2026-02-26 17:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:30.683595081 +0000 UTC m=+1605.245349420" watchObservedRunningTime="2026-02-26 17:41:30.694555618 +0000 UTC m=+1605.256309957" Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.713354 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.720041 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.9809255009999998 podStartE2EDuration="6.72000009s" podCreationTimestamp="2026-02-26 17:41:24 +0000 UTC" firstStartedPulling="2026-02-26 17:41:26.3343173 +0000 UTC m=+1600.896071639" lastFinishedPulling="2026-02-26 17:41:30.073391889 +0000 UTC m=+1604.635146228" observedRunningTime="2026-02-26 
17:41:30.706415367 +0000 UTC m=+1605.268169706" watchObservedRunningTime="2026-02-26 17:41:30.72000009 +0000 UTC m=+1605.281754429" Feb 26 17:41:30 crc kubenswrapper[4805]: I0226 17:41:30.742895 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.503357207 podStartE2EDuration="6.742868017s" podCreationTimestamp="2026-02-26 17:41:24 +0000 UTC" firstStartedPulling="2026-02-26 17:41:25.835007847 +0000 UTC m=+1600.396762176" lastFinishedPulling="2026-02-26 17:41:30.074518647 +0000 UTC m=+1604.636272986" observedRunningTime="2026-02-26 17:41:30.728559876 +0000 UTC m=+1605.290314255" watchObservedRunningTime="2026-02-26 17:41:30.742868017 +0000 UTC m=+1605.304622346" Feb 26 17:41:31 crc kubenswrapper[4805]: I0226 17:41:31.683167 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f85b857-4074-4d69-9e39-91261fbe93fb","Type":"ContainerStarted","Data":"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45"} Feb 26 17:41:31 crc kubenswrapper[4805]: I0226 17:41:31.683339 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-log" containerID="cri-o://6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438" gracePeriod=30 Feb 26 17:41:31 crc kubenswrapper[4805]: I0226 17:41:31.683378 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-metadata" containerID="cri-o://4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45" gracePeriod=30 Feb 26 17:41:31 crc kubenswrapper[4805]: I0226 17:41:31.688727 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c6496761-f8be-4176-a000-0293499b739e","Type":"ContainerStarted","Data":"ae5cf45ce735f1633abac0bfd93ea47ecc65e95675fc2841e610a3c4182583fd"} Feb 26 17:41:31 crc kubenswrapper[4805]: I0226 17:41:31.728261 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.570302738 podStartE2EDuration="7.728238639s" podCreationTimestamp="2026-02-26 17:41:24 +0000 UTC" firstStartedPulling="2026-02-26 17:41:25.916596826 +0000 UTC m=+1600.478351165" lastFinishedPulling="2026-02-26 17:41:30.074532737 +0000 UTC m=+1604.636287066" observedRunningTime="2026-02-26 17:41:31.727233504 +0000 UTC m=+1606.288987853" watchObservedRunningTime="2026-02-26 17:41:31.728238639 +0000 UTC m=+1606.289992978" Feb 26 17:41:31 crc kubenswrapper[4805]: I0226 17:41:31.735455 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.570327929 podStartE2EDuration="7.735423781s" podCreationTimestamp="2026-02-26 17:41:24 +0000 UTC" firstStartedPulling="2026-02-26 17:41:25.90963162 +0000 UTC m=+1600.471385959" lastFinishedPulling="2026-02-26 17:41:30.074727472 +0000 UTC m=+1604.636481811" observedRunningTime="2026-02-26 17:41:31.707203618 +0000 UTC m=+1606.268957967" watchObservedRunningTime="2026-02-26 17:41:31.735423781 +0000 UTC m=+1606.297178120" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.476474 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.629286 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-combined-ca-bundle\") pod \"0f85b857-4074-4d69-9e39-91261fbe93fb\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.629387 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f85b857-4074-4d69-9e39-91261fbe93fb-logs\") pod \"0f85b857-4074-4d69-9e39-91261fbe93fb\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.629463 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdrvs\" (UniqueName: \"kubernetes.io/projected/0f85b857-4074-4d69-9e39-91261fbe93fb-kube-api-access-rdrvs\") pod \"0f85b857-4074-4d69-9e39-91261fbe93fb\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.629571 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-config-data\") pod \"0f85b857-4074-4d69-9e39-91261fbe93fb\" (UID: \"0f85b857-4074-4d69-9e39-91261fbe93fb\") " Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.629849 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f85b857-4074-4d69-9e39-91261fbe93fb-logs" (OuterVolumeSpecName: "logs") pod "0f85b857-4074-4d69-9e39-91261fbe93fb" (UID: "0f85b857-4074-4d69-9e39-91261fbe93fb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.630506 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f85b857-4074-4d69-9e39-91261fbe93fb-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.639522 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f85b857-4074-4d69-9e39-91261fbe93fb-kube-api-access-rdrvs" (OuterVolumeSpecName: "kube-api-access-rdrvs") pod "0f85b857-4074-4d69-9e39-91261fbe93fb" (UID: "0f85b857-4074-4d69-9e39-91261fbe93fb"). InnerVolumeSpecName "kube-api-access-rdrvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.664225 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-config-data" (OuterVolumeSpecName: "config-data") pod "0f85b857-4074-4d69-9e39-91261fbe93fb" (UID: "0f85b857-4074-4d69-9e39-91261fbe93fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.672145 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f85b857-4074-4d69-9e39-91261fbe93fb" (UID: "0f85b857-4074-4d69-9e39-91261fbe93fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701661 4805 generic.go:334] "Generic (PLEG): container finished" podID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerID="4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45" exitCode=0 Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701694 4805 generic.go:334] "Generic (PLEG): container finished" podID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerID="6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438" exitCode=143 Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701722 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f85b857-4074-4d69-9e39-91261fbe93fb","Type":"ContainerDied","Data":"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45"} Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701762 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f85b857-4074-4d69-9e39-91261fbe93fb","Type":"ContainerDied","Data":"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438"} Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0f85b857-4074-4d69-9e39-91261fbe93fb","Type":"ContainerDied","Data":"d79ccbd876a908d4c84f511f4c4a5701914547b5fd212f33cd3e4acca9271ada"} Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701795 4805 scope.go:117] "RemoveContainer" containerID="4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.701795 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.734217 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.734721 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdrvs\" (UniqueName: \"kubernetes.io/projected/0f85b857-4074-4d69-9e39-91261fbe93fb-kube-api-access-rdrvs\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.734735 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f85b857-4074-4d69-9e39-91261fbe93fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.772469 4805 scope.go:117] "RemoveContainer" containerID="6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.825507 4805 scope.go:117] "RemoveContainer" containerID="4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.840711 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.840770 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:32 crc kubenswrapper[4805]: E0226 17:41:32.842776 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45\": container with ID starting with 4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45 not found: ID does not exist" containerID="4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45" Feb 
26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.842820 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45"} err="failed to get container status \"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45\": rpc error: code = NotFound desc = could not find container \"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45\": container with ID starting with 4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45 not found: ID does not exist" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.842851 4805 scope.go:117] "RemoveContainer" containerID="6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438" Feb 26 17:41:32 crc kubenswrapper[4805]: E0226 17:41:32.844669 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438\": container with ID starting with 6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438 not found: ID does not exist" containerID="6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.844695 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438"} err="failed to get container status \"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438\": rpc error: code = NotFound desc = could not find container \"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438\": container with ID starting with 6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438 not found: ID does not exist" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.844718 4805 scope.go:117] "RemoveContainer" 
containerID="4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.845305 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45"} err="failed to get container status \"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45\": rpc error: code = NotFound desc = could not find container \"4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45\": container with ID starting with 4bb7f967f8e4cb7556c0047a47569b053e480b0bbfc57b2b117717525ac19a45 not found: ID does not exist" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.845342 4805 scope.go:117] "RemoveContainer" containerID="6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.845613 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438"} err="failed to get container status \"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438\": rpc error: code = NotFound desc = could not find container \"6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438\": container with ID starting with 6ff994da92f4f10739e1442f3355f60915c296a6ace82363b8b9eb98e0200438 not found: ID does not exist" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.856243 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:32 crc kubenswrapper[4805]: E0226 17:41:32.872384 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-metadata" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.872434 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" 
containerName="nova-metadata-metadata" Feb 26 17:41:32 crc kubenswrapper[4805]: E0226 17:41:32.872531 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-log" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.872538 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-log" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.873091 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-metadata" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.873118 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" containerName="nova-metadata-log" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.874763 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.883624 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.883910 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 17:41:32 crc kubenswrapper[4805]: I0226 17:41:32.895413 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.003734 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f85b857-4074-4d69-9e39-91261fbe93fb" path="/var/lib/kubelet/pods/0f85b857-4074-4d69-9e39-91261fbe93fb/volumes" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.049238 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.049296 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.049335 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-config-data\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.049375 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgvxv\" (UniqueName: \"kubernetes.io/projected/50271682-eacc-4986-8683-47d190c39a43-kube-api-access-fgvxv\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.049429 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50271682-eacc-4986-8683-47d190c39a43-logs\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.151887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-config-data\") pod \"nova-metadata-0\" (UID: 
\"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.152295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgvxv\" (UniqueName: \"kubernetes.io/projected/50271682-eacc-4986-8683-47d190c39a43-kube-api-access-fgvxv\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.152506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50271682-eacc-4986-8683-47d190c39a43-logs\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.152789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.152926 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.153122 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50271682-eacc-4986-8683-47d190c39a43-logs\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.158824 4805 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-config-data\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.159351 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.164687 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.171828 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgvxv\" (UniqueName: \"kubernetes.io/projected/50271682-eacc-4986-8683-47d190c39a43-kube-api-access-fgvxv\") pod \"nova-metadata-0\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.210649 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.685443 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 17:41:33 crc kubenswrapper[4805]: I0226 17:41:33.692956 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:33 crc kubenswrapper[4805]: W0226 17:41:33.703488 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50271682_eacc_4986_8683_47d190c39a43.slice/crio-4b206bc59d26413ad28c604d18a38a084c8c5a5455124a3efac5d991bfabc0db WatchSource:0}: Error finding container 4b206bc59d26413ad28c604d18a38a084c8c5a5455124a3efac5d991bfabc0db: Status 404 returned error can't find the container with id 4b206bc59d26413ad28c604d18a38a084c8c5a5455124a3efac5d991bfabc0db Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.742761 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50271682-eacc-4986-8683-47d190c39a43","Type":"ContainerStarted","Data":"48d6d4b6f026f204a02d5306822933b9b1f3dfc97330119b805fac0f1a0a8ca6"} Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.743116 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50271682-eacc-4986-8683-47d190c39a43","Type":"ContainerStarted","Data":"48a89cd96a9f5affe57f2efcee961369d5818e17611e851a218403d7b41542cb"} Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.743129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50271682-eacc-4986-8683-47d190c39a43","Type":"ContainerStarted","Data":"4b206bc59d26413ad28c604d18a38a084c8c5a5455124a3efac5d991bfabc0db"} Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.771439 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.771415253 podStartE2EDuration="2.771415253s" podCreationTimestamp="2026-02-26 17:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:34.760649651 +0000 UTC m=+1609.322404010" watchObservedRunningTime="2026-02-26 17:41:34.771415253 +0000 UTC m=+1609.333169592" Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.812910 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.812981 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.848148 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.948549 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:41:34 crc kubenswrapper[4805]: I0226 17:41:34.948605 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.316284 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.422347 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-854cm"] Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.422684 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58bd69657f-854cm" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerName="dnsmasq-dns" containerID="cri-o://3bc403d4acf8101afb0c0a0b395e3755fcafc174537b0f18edf03e29f25feb9b" gracePeriod=10 Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.754411 4805 
generic.go:334] "Generic (PLEG): container finished" podID="f590f397-4abc-4f7a-a9f9-013f581e3ec6" containerID="a3416ad08c841f32896caf390933d5af6f20f84578821770aff259b157fb0d60" exitCode=0 Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.754506 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cs754" event={"ID":"f590f397-4abc-4f7a-a9f9-013f581e3ec6","Type":"ContainerDied","Data":"a3416ad08c841f32896caf390933d5af6f20f84578821770aff259b157fb0d60"} Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.757066 4805 generic.go:334] "Generic (PLEG): container finished" podID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerID="3bc403d4acf8101afb0c0a0b395e3755fcafc174537b0f18edf03e29f25feb9b" exitCode=0 Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.757408 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-854cm" event={"ID":"753f754f-839c-49a1-81e7-93d2c94a9cc7","Type":"ContainerDied","Data":"3bc403d4acf8101afb0c0a0b395e3755fcafc174537b0f18edf03e29f25feb9b"} Feb 26 17:41:35 crc kubenswrapper[4805]: I0226 17:41:35.805926 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.034232 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.034432 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 17:41:36 crc 
kubenswrapper[4805]: I0226 17:41:36.102368 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.220348 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-config\") pod \"753f754f-839c-49a1-81e7-93d2c94a9cc7\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.220434 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-nb\") pod \"753f754f-839c-49a1-81e7-93d2c94a9cc7\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.220579 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-svc\") pod \"753f754f-839c-49a1-81e7-93d2c94a9cc7\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.220668 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-swift-storage-0\") pod \"753f754f-839c-49a1-81e7-93d2c94a9cc7\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.220705 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-sb\") pod \"753f754f-839c-49a1-81e7-93d2c94a9cc7\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.220750 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5tmv\" (UniqueName: \"kubernetes.io/projected/753f754f-839c-49a1-81e7-93d2c94a9cc7-kube-api-access-c5tmv\") pod \"753f754f-839c-49a1-81e7-93d2c94a9cc7\" (UID: \"753f754f-839c-49a1-81e7-93d2c94a9cc7\") " Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.231209 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753f754f-839c-49a1-81e7-93d2c94a9cc7-kube-api-access-c5tmv" (OuterVolumeSpecName: "kube-api-access-c5tmv") pod "753f754f-839c-49a1-81e7-93d2c94a9cc7" (UID: "753f754f-839c-49a1-81e7-93d2c94a9cc7"). InnerVolumeSpecName "kube-api-access-c5tmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.276610 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "753f754f-839c-49a1-81e7-93d2c94a9cc7" (UID: "753f754f-839c-49a1-81e7-93d2c94a9cc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.291073 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "753f754f-839c-49a1-81e7-93d2c94a9cc7" (UID: "753f754f-839c-49a1-81e7-93d2c94a9cc7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.294223 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "753f754f-839c-49a1-81e7-93d2c94a9cc7" (UID: "753f754f-839c-49a1-81e7-93d2c94a9cc7"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.295674 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-config" (OuterVolumeSpecName: "config") pod "753f754f-839c-49a1-81e7-93d2c94a9cc7" (UID: "753f754f-839c-49a1-81e7-93d2c94a9cc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.312895 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "753f754f-839c-49a1-81e7-93d2c94a9cc7" (UID: "753f754f-839c-49a1-81e7-93d2c94a9cc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.323311 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.323360 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.323376 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.323387 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-dns-swift-storage-0\") on node \"crc\" DevicePath 
\"\"" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.323397 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/753f754f-839c-49a1-81e7-93d2c94a9cc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.323409 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5tmv\" (UniqueName: \"kubernetes.io/projected/753f754f-839c-49a1-81e7-93d2c94a9cc7-kube-api-access-c5tmv\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.771116 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bd69657f-854cm" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.772153 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bd69657f-854cm" event={"ID":"753f754f-839c-49a1-81e7-93d2c94a9cc7","Type":"ContainerDied","Data":"8db088528661a72ea5ad391f4170ef0a666904b54672a331e8e6701e68f83b40"} Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.772211 4805 scope.go:117] "RemoveContainer" containerID="3bc403d4acf8101afb0c0a0b395e3755fcafc174537b0f18edf03e29f25feb9b" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.832296 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-854cm"] Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.845599 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bd69657f-854cm"] Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.848698 4805 scope.go:117] "RemoveContainer" containerID="bb7dac5fd64b17452adddfbc8f2b6be3f4c22874aa3e40d8559448eb2c23cf51" Feb 26 17:41:36 crc kubenswrapper[4805]: I0226 17:41:36.965896 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" path="/var/lib/kubelet/pods/753f754f-839c-49a1-81e7-93d2c94a9cc7/volumes" Feb 
26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.419311 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cs754" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.551157 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-combined-ca-bundle\") pod \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.551219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-scripts\") pod \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.551321 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghw5t\" (UniqueName: \"kubernetes.io/projected/f590f397-4abc-4f7a-a9f9-013f581e3ec6-kube-api-access-ghw5t\") pod \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.551386 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-config-data\") pod \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\" (UID: \"f590f397-4abc-4f7a-a9f9-013f581e3ec6\") " Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.558040 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-scripts" (OuterVolumeSpecName: "scripts") pod "f590f397-4abc-4f7a-a9f9-013f581e3ec6" (UID: "f590f397-4abc-4f7a-a9f9-013f581e3ec6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.558178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f590f397-4abc-4f7a-a9f9-013f581e3ec6-kube-api-access-ghw5t" (OuterVolumeSpecName: "kube-api-access-ghw5t") pod "f590f397-4abc-4f7a-a9f9-013f581e3ec6" (UID: "f590f397-4abc-4f7a-a9f9-013f581e3ec6"). InnerVolumeSpecName "kube-api-access-ghw5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.586081 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-config-data" (OuterVolumeSpecName: "config-data") pod "f590f397-4abc-4f7a-a9f9-013f581e3ec6" (UID: "f590f397-4abc-4f7a-a9f9-013f581e3ec6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.601180 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f590f397-4abc-4f7a-a9f9-013f581e3ec6" (UID: "f590f397-4abc-4f7a-a9f9-013f581e3ec6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.653156 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghw5t\" (UniqueName: \"kubernetes.io/projected/f590f397-4abc-4f7a-a9f9-013f581e3ec6-kube-api-access-ghw5t\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.653188 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.653198 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.653206 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f590f397-4abc-4f7a-a9f9-013f581e3ec6-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.785274 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cs754" event={"ID":"f590f397-4abc-4f7a-a9f9-013f581e3ec6","Type":"ContainerDied","Data":"9a96a82e690b30306510aa828202a0fcc975e4bada0e7c262c97c89d0f36574d"} Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.785316 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cs754" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.785337 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a96a82e690b30306510aa828202a0fcc975e4bada0e7c262c97c89d0f36574d" Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.948628 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.948888 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-log" containerID="cri-o://567172892bede0e1db5f88587994e74e56faff7e6330f45fb97264b33dc5ed77" gracePeriod=30 Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.948991 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-api" containerID="cri-o://ae5cf45ce735f1633abac0bfd93ea47ecc65e95675fc2841e610a3c4182583fd" gracePeriod=30 Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.964404 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.986721 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.986939 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-log" containerID="cri-o://48a89cd96a9f5affe57f2efcee961369d5818e17611e851a218403d7b41542cb" gracePeriod=30 Feb 26 17:41:37 crc kubenswrapper[4805]: I0226 17:41:37.987104 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="50271682-eacc-4986-8683-47d190c39a43" 
containerName="nova-metadata-metadata" containerID="cri-o://48d6d4b6f026f204a02d5306822933b9b1f3dfc97330119b805fac0f1a0a8ca6" gracePeriod=30 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.148606 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.154275 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" containerName="kube-state-metrics" containerID="cri-o://ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8" gracePeriod=30 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.212188 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.212259 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.775959 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.830368 4805 generic.go:334] "Generic (PLEG): container finished" podID="d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" containerID="ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8" exitCode=2 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.830577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7","Type":"ContainerDied","Data":"ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8"} Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.830843 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7","Type":"ContainerDied","Data":"0b726e2072a4435e4da78b93a4af0451ed1eeaa5a962f3293a93c61c27b84c61"} Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.830872 4805 scope.go:117] "RemoveContainer" containerID="ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.830695 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.845116 4805 generic.go:334] "Generic (PLEG): container finished" podID="c6496761-f8be-4176-a000-0293499b739e" containerID="567172892bede0e1db5f88587994e74e56faff7e6330f45fb97264b33dc5ed77" exitCode=143 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.845234 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6496761-f8be-4176-a000-0293499b739e","Type":"ContainerDied","Data":"567172892bede0e1db5f88587994e74e56faff7e6330f45fb97264b33dc5ed77"} Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.848987 4805 generic.go:334] "Generic (PLEG): container finished" podID="50271682-eacc-4986-8683-47d190c39a43" containerID="48d6d4b6f026f204a02d5306822933b9b1f3dfc97330119b805fac0f1a0a8ca6" exitCode=0 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.849032 4805 generic.go:334] "Generic (PLEG): container finished" podID="50271682-eacc-4986-8683-47d190c39a43" containerID="48a89cd96a9f5affe57f2efcee961369d5818e17611e851a218403d7b41542cb" exitCode=143 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.849279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50271682-eacc-4986-8683-47d190c39a43","Type":"ContainerDied","Data":"48d6d4b6f026f204a02d5306822933b9b1f3dfc97330119b805fac0f1a0a8ca6"} Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.849346 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50271682-eacc-4986-8683-47d190c39a43","Type":"ContainerDied","Data":"48a89cd96a9f5affe57f2efcee961369d5818e17611e851a218403d7b41542cb"} Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.850593 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bea31308-564c-4030-b718-41b7fb684418" containerName="nova-scheduler-scheduler" 
containerID="cri-o://f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05" gracePeriod=30 Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.896838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw94n\" (UniqueName: \"kubernetes.io/projected/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7-kube-api-access-dw94n\") pod \"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7\" (UID: \"d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7\") " Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.910047 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7-kube-api-access-dw94n" (OuterVolumeSpecName: "kube-api-access-dw94n") pod "d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" (UID: "d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7"). InnerVolumeSpecName "kube-api-access-dw94n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.945168 4805 scope.go:117] "RemoveContainer" containerID="ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8" Feb 26 17:41:38 crc kubenswrapper[4805]: E0226 17:41:38.946608 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8\": container with ID starting with ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8 not found: ID does not exist" containerID="ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8" Feb 26 17:41:38 crc kubenswrapper[4805]: I0226 17:41:38.946675 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8"} err="failed to get container status \"ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8\": rpc error: code = NotFound desc = could not find container 
\"ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8\": container with ID starting with ee20e971a7958bf6c11e987ee4ffc4aa8f7d51232d6dc453c9f5ab49a1faa6b8 not found: ID does not exist" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:38.999962 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw94n\" (UniqueName: \"kubernetes.io/projected/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7-kube-api-access-dw94n\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.006881 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.105625 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgvxv\" (UniqueName: \"kubernetes.io/projected/50271682-eacc-4986-8683-47d190c39a43-kube-api-access-fgvxv\") pod \"50271682-eacc-4986-8683-47d190c39a43\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.105844 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-combined-ca-bundle\") pod \"50271682-eacc-4986-8683-47d190c39a43\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.106062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-nova-metadata-tls-certs\") pod \"50271682-eacc-4986-8683-47d190c39a43\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.106204 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50271682-eacc-4986-8683-47d190c39a43-logs\") pod 
\"50271682-eacc-4986-8683-47d190c39a43\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.106315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-config-data\") pod \"50271682-eacc-4986-8683-47d190c39a43\" (UID: \"50271682-eacc-4986-8683-47d190c39a43\") " Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.107277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50271682-eacc-4986-8683-47d190c39a43-logs" (OuterVolumeSpecName: "logs") pod "50271682-eacc-4986-8683-47d190c39a43" (UID: "50271682-eacc-4986-8683-47d190c39a43"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.107819 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50271682-eacc-4986-8683-47d190c39a43-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.117733 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50271682-eacc-4986-8683-47d190c39a43-kube-api-access-fgvxv" (OuterVolumeSpecName: "kube-api-access-fgvxv") pod "50271682-eacc-4986-8683-47d190c39a43" (UID: "50271682-eacc-4986-8683-47d190c39a43"). InnerVolumeSpecName "kube-api-access-fgvxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.146984 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50271682-eacc-4986-8683-47d190c39a43" (UID: "50271682-eacc-4986-8683-47d190c39a43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.147797 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-config-data" (OuterVolumeSpecName: "config-data") pod "50271682-eacc-4986-8683-47d190c39a43" (UID: "50271682-eacc-4986-8683-47d190c39a43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.193555 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "50271682-eacc-4986-8683-47d190c39a43" (UID: "50271682-eacc-4986-8683-47d190c39a43"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.210283 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.210555 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgvxv\" (UniqueName: \"kubernetes.io/projected/50271682-eacc-4986-8683-47d190c39a43-kube-api-access-fgvxv\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.210640 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.210720 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/50271682-eacc-4986-8683-47d190c39a43-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.295068 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.313460 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.334516 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.335401 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" containerName="kube-state-metrics" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335429 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" containerName="kube-state-metrics" Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.335442 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerName="init" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335453 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerName="init" Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.335469 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-log" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335477 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-log" Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.335496 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-metadata" Feb 26 17:41:39 crc 
kubenswrapper[4805]: I0226 17:41:39.335502 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-metadata" Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.335517 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerName="dnsmasq-dns" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335523 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerName="dnsmasq-dns" Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.335535 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f590f397-4abc-4f7a-a9f9-013f581e3ec6" containerName="nova-manage" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335540 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f590f397-4abc-4f7a-a9f9-013f581e3ec6" containerName="nova-manage" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335789 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f590f397-4abc-4f7a-a9f9-013f581e3ec6" containerName="nova-manage" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335807 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="753f754f-839c-49a1-81e7-93d2c94a9cc7" containerName="dnsmasq-dns" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335819 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-log" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335828 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50271682-eacc-4986-8683-47d190c39a43" containerName="nova-metadata-metadata" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.335837 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" containerName="kube-state-metrics" Feb 26 17:41:39 crc 
kubenswrapper[4805]: I0226 17:41:39.337134 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.340485 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.343728 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.353563 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.417787 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p852n\" (UniqueName: \"kubernetes.io/projected/ce1b9317-a017-46fb-8be3-686d80446649-kube-api-access-p852n\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.417867 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.418189 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.418312 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.525330 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.525617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.526271 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p852n\" (UniqueName: \"kubernetes.io/projected/ce1b9317-a017-46fb-8be3-686d80446649-kube-api-access-p852n\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.526332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.529963 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.533169 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.533427 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce1b9317-a017-46fb-8be3-686d80446649-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.546657 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p852n\" (UniqueName: \"kubernetes.io/projected/ce1b9317-a017-46fb-8be3-686d80446649-kube-api-access-p852n\") pod \"kube-state-metrics-0\" (UID: \"ce1b9317-a017-46fb-8be3-686d80446649\") " pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.667668 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.815623 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.818123 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.820409 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:41:39 crc kubenswrapper[4805]: E0226 17:41:39.820497 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bea31308-564c-4030-b718-41b7fb684418" containerName="nova-scheduler-scheduler" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.877564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50271682-eacc-4986-8683-47d190c39a43","Type":"ContainerDied","Data":"4b206bc59d26413ad28c604d18a38a084c8c5a5455124a3efac5d991bfabc0db"} Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.877617 4805 scope.go:117] 
"RemoveContainer" containerID="48d6d4b6f026f204a02d5306822933b9b1f3dfc97330119b805fac0f1a0a8ca6" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.877612 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.881745 4805 generic.go:334] "Generic (PLEG): container finished" podID="59254db9-616c-48c0-bad7-c55d30e99749" containerID="ea36b2b3a113a3b525158efbbaa019cdbbe15f4b9c7b369d58bde6e7a288fd7f" exitCode=0 Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.881821 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" event={"ID":"59254db9-616c-48c0-bad7-c55d30e99749","Type":"ContainerDied","Data":"ea36b2b3a113a3b525158efbbaa019cdbbe15f4b9c7b369d58bde6e7a288fd7f"} Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.928857 4805 scope.go:117] "RemoveContainer" containerID="48a89cd96a9f5affe57f2efcee961369d5818e17611e851a218403d7b41542cb" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.953937 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.974512 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.987712 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.990205 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.993979 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.994228 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 17:41:39 crc kubenswrapper[4805]: I0226 17:41:39.997628 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.144226 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-config-data\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.144583 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.144731 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-logs\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.144999 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.145100 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7jxd\" (UniqueName: \"kubernetes.io/projected/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-kube-api-access-d7jxd\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.161437 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.246998 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-config-data\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.247136 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.247163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-logs\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.247245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.247295 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7jxd\" (UniqueName: \"kubernetes.io/projected/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-kube-api-access-d7jxd\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.247763 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-logs\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.254379 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.256651 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.256770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-config-data\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.267654 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d7jxd\" (UniqueName: \"kubernetes.io/projected/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-kube-api-access-d7jxd\") pod \"nova-metadata-0\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.345569 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.645523 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.647597 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-central-agent" containerID="cri-o://cf27a45c5291442af74cc380bc8d15222294677f2bddb8b31454e449079be748" gracePeriod=30 Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.647850 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="proxy-httpd" containerID="cri-o://d547e4b8ba276a830f861d26a559ed481cbbd31aa0f158046bb62112fa2898f8" gracePeriod=30 Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.648010 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-notification-agent" containerID="cri-o://4114c257ed26074cd4d06494800c25635d1afa20c52fb1e41b1c835f739f445a" gracePeriod=30 Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.649616 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="sg-core" containerID="cri-o://ecfad850bcea8f2884f126683e405241530f8ba87be0fa28df4214b1003920aa" gracePeriod=30 Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 
17:41:40.922862 4805 generic.go:334] "Generic (PLEG): container finished" podID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerID="ecfad850bcea8f2884f126683e405241530f8ba87be0fa28df4214b1003920aa" exitCode=2 Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.922957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerDied","Data":"ecfad850bcea8f2884f126683e405241530f8ba87be0fa28df4214b1003920aa"} Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.924972 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.930068 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ce1b9317-a017-46fb-8be3-686d80446649","Type":"ContainerStarted","Data":"8eca2393cf8109f366d3e73f2dc96c575f40287f90807ab74a6c8fb057d5e47b"} Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.971956 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50271682-eacc-4986-8683-47d190c39a43" path="/var/lib/kubelet/pods/50271682-eacc-4986-8683-47d190c39a43/volumes" Feb 26 17:41:40 crc kubenswrapper[4805]: I0226 17:41:40.974012 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7" path="/var/lib/kubelet/pods/d82b85ed-2d9b-4e61-aa95-7ca78b0e96e7/volumes" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.450226 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.569476 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnqb5\" (UniqueName: \"kubernetes.io/projected/59254db9-616c-48c0-bad7-c55d30e99749-kube-api-access-gnqb5\") pod \"59254db9-616c-48c0-bad7-c55d30e99749\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.569578 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-config-data\") pod \"59254db9-616c-48c0-bad7-c55d30e99749\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.569620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-scripts\") pod \"59254db9-616c-48c0-bad7-c55d30e99749\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.570161 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-combined-ca-bundle\") pod \"59254db9-616c-48c0-bad7-c55d30e99749\" (UID: \"59254db9-616c-48c0-bad7-c55d30e99749\") " Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.583937 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-scripts" (OuterVolumeSpecName: "scripts") pod "59254db9-616c-48c0-bad7-c55d30e99749" (UID: "59254db9-616c-48c0-bad7-c55d30e99749"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.584129 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59254db9-616c-48c0-bad7-c55d30e99749-kube-api-access-gnqb5" (OuterVolumeSpecName: "kube-api-access-gnqb5") pod "59254db9-616c-48c0-bad7-c55d30e99749" (UID: "59254db9-616c-48c0-bad7-c55d30e99749"). InnerVolumeSpecName "kube-api-access-gnqb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.603509 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-config-data" (OuterVolumeSpecName: "config-data") pod "59254db9-616c-48c0-bad7-c55d30e99749" (UID: "59254db9-616c-48c0-bad7-c55d30e99749"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.604097 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59254db9-616c-48c0-bad7-c55d30e99749" (UID: "59254db9-616c-48c0-bad7-c55d30e99749"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.671892 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnqb5\" (UniqueName: \"kubernetes.io/projected/59254db9-616c-48c0-bad7-c55d30e99749-kube-api-access-gnqb5\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.671939 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.671952 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.671964 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59254db9-616c-48c0-bad7-c55d30e99749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.959777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" event={"ID":"59254db9-616c-48c0-bad7-c55d30e99749","Type":"ContainerDied","Data":"4975e6698e0786008128aa47a1051ed4a3b4f6493fd34d56c490f3a78aeb9b0d"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.959825 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4975e6698e0786008128aa47a1051ed4a3b4f6493fd34d56c490f3a78aeb9b0d" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.959790 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bfpdp" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.964357 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"655f50ae-6f7d-46d2-bae9-5c0bd4df422d","Type":"ContainerStarted","Data":"2554de4d0a1888797a657e116a9ccb8477ee095ce0e590f114396b66fb531eb4"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.964410 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"655f50ae-6f7d-46d2-bae9-5c0bd4df422d","Type":"ContainerStarted","Data":"65db96f90483349b35e836f267b6667465eedc484347e24fc8875fd638bf0ed7"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.964424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"655f50ae-6f7d-46d2-bae9-5c0bd4df422d","Type":"ContainerStarted","Data":"90e3249d74f5f4a6d6b8eaa17643bd6e1caf44d261ccfcecf4b100ba1001a663"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.972907 4805 generic.go:334] "Generic (PLEG): container finished" podID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerID="d547e4b8ba276a830f861d26a559ed481cbbd31aa0f158046bb62112fa2898f8" exitCode=0 Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.972941 4805 generic.go:334] "Generic (PLEG): container finished" podID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerID="cf27a45c5291442af74cc380bc8d15222294677f2bddb8b31454e449079be748" exitCode=0 Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.972983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerDied","Data":"d547e4b8ba276a830f861d26a559ed481cbbd31aa0f158046bb62112fa2898f8"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.973039 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerDied","Data":"cf27a45c5291442af74cc380bc8d15222294677f2bddb8b31454e449079be748"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.980143 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ce1b9317-a017-46fb-8be3-686d80446649","Type":"ContainerStarted","Data":"fc7b45cf8c974f6b3571c24d27fc06ce7e707198741699b9878635e9e6196c82"} Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.981924 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.998895 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 17:41:41 crc kubenswrapper[4805]: E0226 17:41:41.999350 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59254db9-616c-48c0-bad7-c55d30e99749" containerName="nova-cell1-conductor-db-sync" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.999368 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="59254db9-616c-48c0-bad7-c55d30e99749" containerName="nova-cell1-conductor-db-sync" Feb 26 17:41:41 crc kubenswrapper[4805]: I0226 17:41:41.999614 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="59254db9-616c-48c0-bad7-c55d30e99749" containerName="nova-cell1-conductor-db-sync" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.000443 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.007754 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.012204 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.012176487 podStartE2EDuration="3.012176487s" podCreationTimestamp="2026-02-26 17:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:41.982639501 +0000 UTC m=+1616.544393850" watchObservedRunningTime="2026-02-26 17:41:42.012176487 +0000 UTC m=+1616.573930826" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.028573 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.028702 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.663976888 podStartE2EDuration="3.028681954s" podCreationTimestamp="2026-02-26 17:41:39 +0000 UTC" firstStartedPulling="2026-02-26 17:41:40.169139766 +0000 UTC m=+1614.730894095" lastFinishedPulling="2026-02-26 17:41:40.533844822 +0000 UTC m=+1615.095599161" observedRunningTime="2026-02-26 17:41:42.008828972 +0000 UTC m=+1616.570583321" watchObservedRunningTime="2026-02-26 17:41:42.028681954 +0000 UTC m=+1616.590436293" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.079791 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82029b76-8e8c-4aab-ac16-906345d63ad8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc 
kubenswrapper[4805]: I0226 17:41:42.080200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppc7x\" (UniqueName: \"kubernetes.io/projected/82029b76-8e8c-4aab-ac16-906345d63ad8-kube-api-access-ppc7x\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.080432 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82029b76-8e8c-4aab-ac16-906345d63ad8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.182482 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82029b76-8e8c-4aab-ac16-906345d63ad8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.182589 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82029b76-8e8c-4aab-ac16-906345d63ad8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.182636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppc7x\" (UniqueName: \"kubernetes.io/projected/82029b76-8e8c-4aab-ac16-906345d63ad8-kube-api-access-ppc7x\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.197351 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82029b76-8e8c-4aab-ac16-906345d63ad8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.197859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82029b76-8e8c-4aab-ac16-906345d63ad8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.204594 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppc7x\" (UniqueName: \"kubernetes.io/projected/82029b76-8e8c-4aab-ac16-906345d63ad8-kube-api-access-ppc7x\") pod \"nova-cell1-conductor-0\" (UID: \"82029b76-8e8c-4aab-ac16-906345d63ad8\") " pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.326808 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:42 crc kubenswrapper[4805]: I0226 17:41:42.859613 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 26 17:41:42 crc kubenswrapper[4805]: W0226 17:41:42.891200 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82029b76_8e8c_4aab_ac16_906345d63ad8.slice/crio-8587e5693babd9b8484d5669d2173ad71627e1377c6f50ac8e8dc641d5010e6b WatchSource:0}: Error finding container 8587e5693babd9b8484d5669d2173ad71627e1377c6f50ac8e8dc641d5010e6b: Status 404 returned error can't find the container with id 8587e5693babd9b8484d5669d2173ad71627e1377c6f50ac8e8dc641d5010e6b Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.017065 4805 generic.go:334] "Generic (PLEG): container finished" podID="bea31308-564c-4030-b718-41b7fb684418" containerID="f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05" exitCode=0 Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.017182 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bea31308-564c-4030-b718-41b7fb684418","Type":"ContainerDied","Data":"f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05"} Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.021341 4805 generic.go:334] "Generic (PLEG): container finished" podID="c6496761-f8be-4176-a000-0293499b739e" containerID="ae5cf45ce735f1633abac0bfd93ea47ecc65e95675fc2841e610a3c4182583fd" exitCode=0 Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.021419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6496761-f8be-4176-a000-0293499b739e","Type":"ContainerDied","Data":"ae5cf45ce735f1633abac0bfd93ea47ecc65e95675fc2841e610a3c4182583fd"} Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.023001 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-0" event={"ID":"82029b76-8e8c-4aab-ac16-906345d63ad8","Type":"ContainerStarted","Data":"8587e5693babd9b8484d5669d2173ad71627e1377c6f50ac8e8dc641d5010e6b"} Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.039241 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.203246 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-config-data\") pod \"c6496761-f8be-4176-a000-0293499b739e\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.203381 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqv2c\" (UniqueName: \"kubernetes.io/projected/c6496761-f8be-4176-a000-0293499b739e-kube-api-access-mqv2c\") pod \"c6496761-f8be-4176-a000-0293499b739e\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.203407 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6496761-f8be-4176-a000-0293499b739e-logs\") pod \"c6496761-f8be-4176-a000-0293499b739e\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.203513 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-combined-ca-bundle\") pod \"c6496761-f8be-4176-a000-0293499b739e\" (UID: \"c6496761-f8be-4176-a000-0293499b739e\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.207317 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6496761-f8be-4176-a000-0293499b739e-logs" (OuterVolumeSpecName: 
"logs") pod "c6496761-f8be-4176-a000-0293499b739e" (UID: "c6496761-f8be-4176-a000-0293499b739e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.212309 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6496761-f8be-4176-a000-0293499b739e-kube-api-access-mqv2c" (OuterVolumeSpecName: "kube-api-access-mqv2c") pod "c6496761-f8be-4176-a000-0293499b739e" (UID: "c6496761-f8be-4176-a000-0293499b739e"). InnerVolumeSpecName "kube-api-access-mqv2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.242134 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-config-data" (OuterVolumeSpecName: "config-data") pod "c6496761-f8be-4176-a000-0293499b739e" (UID: "c6496761-f8be-4176-a000-0293499b739e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.264340 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6496761-f8be-4176-a000-0293499b739e" (UID: "c6496761-f8be-4176-a000-0293499b739e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.321958 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqv2c\" (UniqueName: \"kubernetes.io/projected/c6496761-f8be-4176-a000-0293499b739e-kube-api-access-mqv2c\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.321994 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6496761-f8be-4176-a000-0293499b739e-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.322007 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.322020 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6496761-f8be-4176-a000-0293499b739e-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.445537 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.939628 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdmql\" (UniqueName: \"kubernetes.io/projected/bea31308-564c-4030-b718-41b7fb684418-kube-api-access-jdmql\") pod \"bea31308-564c-4030-b718-41b7fb684418\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.939739 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-config-data\") pod \"bea31308-564c-4030-b718-41b7fb684418\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.939778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-combined-ca-bundle\") pod \"bea31308-564c-4030-b718-41b7fb684418\" (UID: \"bea31308-564c-4030-b718-41b7fb684418\") " Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.948278 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea31308-564c-4030-b718-41b7fb684418-kube-api-access-jdmql" (OuterVolumeSpecName: "kube-api-access-jdmql") pod "bea31308-564c-4030-b718-41b7fb684418" (UID: "bea31308-564c-4030-b718-41b7fb684418"). InnerVolumeSpecName "kube-api-access-jdmql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.985442 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bea31308-564c-4030-b718-41b7fb684418" (UID: "bea31308-564c-4030-b718-41b7fb684418"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:43 crc kubenswrapper[4805]: I0226 17:41:43.992280 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-config-data" (OuterVolumeSpecName: "config-data") pod "bea31308-564c-4030-b718-41b7fb684418" (UID: "bea31308-564c-4030-b718-41b7fb684418"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.039663 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"82029b76-8e8c-4aab-ac16-906345d63ad8","Type":"ContainerStarted","Data":"ce1d96049297fb0d6fac188fd46e240431f9ea6db34d81beb960c55cce38f337"} Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.048228 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.068241 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bea31308-564c-4030-b718-41b7fb684418","Type":"ContainerDied","Data":"32a16e243a6d4ecc5b6918a01323c102670ff8f0f75d8b1d6d55921057946ab4"} Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.068315 4805 scope.go:117] "RemoveContainer" containerID="f1ce4fafa7d423379fed195c24f8437a0511b49fd88927f4b96c59473168ee05" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.057301 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.065646 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.065600467 podStartE2EDuration="3.065600467s" podCreationTimestamp="2026-02-26 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:44.06533057 +0000 UTC m=+1618.627084909" watchObservedRunningTime="2026-02-26 17:41:44.065600467 +0000 UTC m=+1618.627354816" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.051791 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.069412 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bea31308-564c-4030-b718-41b7fb684418-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.069696 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6496761-f8be-4176-a000-0293499b739e","Type":"ContainerDied","Data":"c342933d0f422c141ea086370fbc3d58b5941a61c9c8afd36798c48185940856"} Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.069847 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.071224 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdmql\" (UniqueName: \"kubernetes.io/projected/bea31308-564c-4030-b718-41b7fb684418-kube-api-access-jdmql\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.166645 4805 scope.go:117] "RemoveContainer" containerID="ae5cf45ce735f1633abac0bfd93ea47ecc65e95675fc2841e610a3c4182583fd" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.180577 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.202889 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.223394 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: E0226 17:41:44.223840 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea31308-564c-4030-b718-41b7fb684418" containerName="nova-scheduler-scheduler" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.223856 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea31308-564c-4030-b718-41b7fb684418" containerName="nova-scheduler-scheduler" Feb 26 17:41:44 crc kubenswrapper[4805]: E0226 17:41:44.223871 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-log" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.223878 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-log" Feb 26 17:41:44 crc kubenswrapper[4805]: E0226 17:41:44.223890 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-api" Feb 26 17:41:44 crc kubenswrapper[4805]: 
I0226 17:41:44.223896 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-api" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.224107 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-api" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.224124 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6496761-f8be-4176-a000-0293499b739e" containerName="nova-api-log" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.224141 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea31308-564c-4030-b718-41b7fb684418" containerName="nova-scheduler-scheduler" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.225198 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.233429 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.238583 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.258960 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.271189 4805 scope.go:117] "RemoveContainer" containerID="567172892bede0e1db5f88587994e74e56faff7e6330f45fb97264b33dc5ed77" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.274323 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 
17:41:44.274557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-config-data\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.274679 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnv49\" (UniqueName: \"kubernetes.io/projected/c0e114c8-fc37-47f7-a02f-202774708b04-kube-api-access-cnv49\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.274821 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0e114c8-fc37-47f7-a02f-202774708b04-logs\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.281224 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.292967 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.295154 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.297959 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.303892 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376405 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376447 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-config-data\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnv49\" (UniqueName: \"kubernetes.io/projected/c0e114c8-fc37-47f7-a02f-202774708b04-kube-api-access-cnv49\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-config-data\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376554 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qlc5g\" (UniqueName: \"kubernetes.io/projected/c4d891fd-08b5-44ba-a363-11b4b8c30f74-kube-api-access-qlc5g\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376578 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.376596 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0e114c8-fc37-47f7-a02f-202774708b04-logs\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.377183 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0e114c8-fc37-47f7-a02f-202774708b04-logs\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.381912 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-config-data\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.384296 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc 
kubenswrapper[4805]: I0226 17:41:44.396416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnv49\" (UniqueName: \"kubernetes.io/projected/c0e114c8-fc37-47f7-a02f-202774708b04-kube-api-access-cnv49\") pod \"nova-api-0\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.483813 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-config-data\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.484109 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlc5g\" (UniqueName: \"kubernetes.io/projected/c4d891fd-08b5-44ba-a363-11b4b8c30f74-kube-api-access-qlc5g\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.484227 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.488183 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.491940 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-config-data\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.506141 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlc5g\" (UniqueName: \"kubernetes.io/projected/c4d891fd-08b5-44ba-a363-11b4b8c30f74-kube-api-access-qlc5g\") pod \"nova-scheduler-0\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.561138 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.614971 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.967898 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea31308-564c-4030-b718-41b7fb684418" path="/var/lib/kubelet/pods/bea31308-564c-4030-b718-41b7fb684418/volumes" Feb 26 17:41:44 crc kubenswrapper[4805]: I0226 17:41:44.968936 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6496761-f8be-4176-a000-0293499b739e" path="/var/lib/kubelet/pods/c6496761-f8be-4176-a000-0293499b739e/volumes" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.113786 4805 generic.go:334] "Generic (PLEG): container finished" podID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerID="4114c257ed26074cd4d06494800c25635d1afa20c52fb1e41b1c835f739f445a" exitCode=0 Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.116045 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerDied","Data":"4114c257ed26074cd4d06494800c25635d1afa20c52fb1e41b1c835f739f445a"} Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 
17:41:45.137560 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.279281 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:41:45 crc kubenswrapper[4805]: W0226 17:41:45.325013 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4d891fd_08b5_44ba_a363_11b4b8c30f74.slice/crio-c105b4ee5556542424686ba814255ee294051b01984e44f2190123886a941a3a WatchSource:0}: Error finding container c105b4ee5556542424686ba814255ee294051b01984e44f2190123886a941a3a: Status 404 returned error can't find the container with id c105b4ee5556542424686ba814255ee294051b01984e44f2190123886a941a3a Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.346233 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.346606 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.504295 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7sdk\" (UniqueName: \"kubernetes.io/projected/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-kube-api-access-f7sdk\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612208 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-log-httpd\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612276 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-run-httpd\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612509 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-sg-core-conf-yaml\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612549 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-config-data\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612763 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-scripts\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-combined-ca-bundle\") pod \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\" (UID: \"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f\") " Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.612778 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.613513 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.614105 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.614129 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.617636 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-kube-api-access-f7sdk" (OuterVolumeSpecName: "kube-api-access-f7sdk") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "kube-api-access-f7sdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.617906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-scripts" (OuterVolumeSpecName: "scripts") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.660687 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.717813 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7sdk\" (UniqueName: \"kubernetes.io/projected/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-kube-api-access-f7sdk\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.717885 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.717907 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.764550 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.777194 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-config-data" (OuterVolumeSpecName: "config-data") pod "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" (UID: "a37a8b33-12b3-47a7-92a3-6e4d8fa8338f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.825315 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:45 crc kubenswrapper[4805]: I0226 17:41:45.825348 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.129150 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0e114c8-fc37-47f7-a02f-202774708b04","Type":"ContainerStarted","Data":"6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059"} Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.129905 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0e114c8-fc37-47f7-a02f-202774708b04","Type":"ContainerStarted","Data":"a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960"} Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.129926 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0e114c8-fc37-47f7-a02f-202774708b04","Type":"ContainerStarted","Data":"dd0ffc1a561993eca3a0cab4d9b7311cb6c7a5030f80559a99c2bde276c0eb19"} Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.132596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4d891fd-08b5-44ba-a363-11b4b8c30f74","Type":"ContainerStarted","Data":"da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf"} Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.132654 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"c4d891fd-08b5-44ba-a363-11b4b8c30f74","Type":"ContainerStarted","Data":"c105b4ee5556542424686ba814255ee294051b01984e44f2190123886a941a3a"} Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.161535 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.161989 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a37a8b33-12b3-47a7-92a3-6e4d8fa8338f","Type":"ContainerDied","Data":"524a0a74ff5c76e199fe950b4f94805df31ef3548db8904a17d4ac819df2d501"} Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.162080 4805 scope.go:117] "RemoveContainer" containerID="d547e4b8ba276a830f861d26a559ed481cbbd31aa0f158046bb62112fa2898f8" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.185902 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.185870715 podStartE2EDuration="2.185870715s" podCreationTimestamp="2026-02-26 17:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:46.15638781 +0000 UTC m=+1620.718142159" watchObservedRunningTime="2026-02-26 17:41:46.185870715 +0000 UTC m=+1620.747625054" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.194288 4805 scope.go:117] "RemoveContainer" containerID="ecfad850bcea8f2884f126683e405241530f8ba87be0fa28df4214b1003920aa" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.229766 4805 scope.go:117] "RemoveContainer" containerID="4114c257ed26074cd4d06494800c25635d1afa20c52fb1e41b1c835f739f445a" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.234375 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.234346028 podStartE2EDuration="2.234346028s" podCreationTimestamp="2026-02-26 17:41:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:41:46.1809447 +0000 UTC m=+1620.742699039" watchObservedRunningTime="2026-02-26 17:41:46.234346028 +0000 UTC m=+1620.796100367" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.255235 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.257351 4805 scope.go:117] "RemoveContainer" containerID="cf27a45c5291442af74cc380bc8d15222294677f2bddb8b31454e449079be748" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.269047 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.282367 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:46 crc kubenswrapper[4805]: E0226 17:41:46.283197 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="proxy-httpd" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283220 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="proxy-httpd" Feb 26 17:41:46 crc kubenswrapper[4805]: E0226 17:41:46.283237 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-notification-agent" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283247 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-notification-agent" Feb 26 17:41:46 crc kubenswrapper[4805]: E0226 17:41:46.283269 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="sg-core" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283277 4805 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="sg-core" Feb 26 17:41:46 crc kubenswrapper[4805]: E0226 17:41:46.283291 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-central-agent" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283300 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-central-agent" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283524 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-central-agent" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283551 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="ceilometer-notification-agent" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283567 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="sg-core" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.283580 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" containerName="proxy-httpd" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.286371 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.289414 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.290954 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.291311 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.296494 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.337838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-config-data\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.337916 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ft99\" (UniqueName: \"kubernetes.io/projected/8f51903c-b83f-44cb-8369-d99ed92259fa-kube-api-access-6ft99\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.338138 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-scripts\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.338190 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.338334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-log-httpd\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.338511 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.338547 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.338581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-run-httpd\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441106 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-config-data\") pod \"ceilometer-0\" (UID: 
\"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ft99\" (UniqueName: \"kubernetes.io/projected/8f51903c-b83f-44cb-8369-d99ed92259fa-kube-api-access-6ft99\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-scripts\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441414 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-log-httpd\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441567 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.441590 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-run-httpd\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.442822 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-run-httpd\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.443395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-log-httpd\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.448930 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.449493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-scripts\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.449898 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.450862 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.454557 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-config-data\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.468586 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ft99\" (UniqueName: \"kubernetes.io/projected/8f51903c-b83f-44cb-8369-d99ed92259fa-kube-api-access-6ft99\") pod \"ceilometer-0\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.614484 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:41:46 crc kubenswrapper[4805]: I0226 17:41:46.969710 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a37a8b33-12b3-47a7-92a3-6e4d8fa8338f" path="/var/lib/kubelet/pods/a37a8b33-12b3-47a7-92a3-6e4d8fa8338f/volumes" Feb 26 17:41:47 crc kubenswrapper[4805]: I0226 17:41:47.119520 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:41:47 crc kubenswrapper[4805]: I0226 17:41:47.177759 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerStarted","Data":"8039c72169635d96ba91048e5d709b14d6b61981f2682de75e54a63af661d8c7"} Feb 26 17:41:48 crc kubenswrapper[4805]: I0226 17:41:48.191538 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerStarted","Data":"90a1fd003b4552a39e757fe06412af50e4e72ac14729bcd76b602d2c54dfc78e"} Feb 26 17:41:49 crc kubenswrapper[4805]: I0226 17:41:49.207851 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerStarted","Data":"47c88961793a4c8a86de2f3790a5d9efb475db6b202bbdc5e982e1c1165d8409"} Feb 26 17:41:49 crc kubenswrapper[4805]: I0226 17:41:49.208294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerStarted","Data":"fd750a6d60081cb150a2e1a3913faf4906d0f9c8f270c684e32fd57f1c1334a5"} Feb 26 17:41:49 crc kubenswrapper[4805]: I0226 17:41:49.615118 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 17:41:49 crc kubenswrapper[4805]: I0226 17:41:49.681500 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 26 
17:41:50 crc kubenswrapper[4805]: I0226 17:41:50.346909 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 17:41:50 crc kubenswrapper[4805]: I0226 17:41:50.348791 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 17:41:51 crc kubenswrapper[4805]: I0226 17:41:51.360286 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:41:51 crc kubenswrapper[4805]: I0226 17:41:51.360305 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:41:52 crc kubenswrapper[4805]: I0226 17:41:52.250957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerStarted","Data":"dd573679be1333bf9a00cc7371870ac6e9706840b6e8f6eb384638033122daaa"} Feb 26 17:41:52 crc kubenswrapper[4805]: I0226 17:41:52.251286 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:41:52 crc kubenswrapper[4805]: I0226 17:41:52.281868 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.263388604 podStartE2EDuration="6.281844575s" podCreationTimestamp="2026-02-26 17:41:46 +0000 UTC" firstStartedPulling="2026-02-26 17:41:47.1333479 +0000 UTC m=+1621.695102239" lastFinishedPulling="2026-02-26 17:41:51.151803871 +0000 UTC m=+1625.713558210" 
observedRunningTime="2026-02-26 17:41:52.271621727 +0000 UTC m=+1626.833376076" watchObservedRunningTime="2026-02-26 17:41:52.281844575 +0000 UTC m=+1626.843598914" Feb 26 17:41:52 crc kubenswrapper[4805]: I0226 17:41:52.361617 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 26 17:41:54 crc kubenswrapper[4805]: I0226 17:41:54.561420 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:41:54 crc kubenswrapper[4805]: I0226 17:41:54.562173 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:41:54 crc kubenswrapper[4805]: I0226 17:41:54.615423 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 17:41:54 crc kubenswrapper[4805]: I0226 17:41:54.653120 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 17:41:55 crc kubenswrapper[4805]: I0226 17:41:55.318835 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 17:41:55 crc kubenswrapper[4805]: I0226 17:41:55.603482 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 17:41:55 crc kubenswrapper[4805]: I0226 17:41:55.644328 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.160134 4805 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535462-ks82t"] Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.162331 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.165390 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.167468 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.167506 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.176980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj748\" (UniqueName: \"kubernetes.io/projected/50a09872-8737-49f7-97b5-6075717d1336-kube-api-access-cj748\") pod \"auto-csr-approver-29535462-ks82t\" (UID: \"50a09872-8737-49f7-97b5-6075717d1336\") " pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.177778 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535462-ks82t"] Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.279958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj748\" (UniqueName: \"kubernetes.io/projected/50a09872-8737-49f7-97b5-6075717d1336-kube-api-access-cj748\") pod \"auto-csr-approver-29535462-ks82t\" (UID: \"50a09872-8737-49f7-97b5-6075717d1336\") " pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.302903 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj748\" 
(UniqueName: \"kubernetes.io/projected/50a09872-8737-49f7-97b5-6075717d1336-kube-api-access-cj748\") pod \"auto-csr-approver-29535462-ks82t\" (UID: \"50a09872-8737-49f7-97b5-6075717d1336\") " pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.361485 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.362464 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.368010 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 17:42:00 crc kubenswrapper[4805]: I0226 17:42:00.490683 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.004361 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535462-ks82t"] Feb 26 17:42:01 crc kubenswrapper[4805]: W0226 17:42:01.018269 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50a09872_8737_49f7_97b5_6075717d1336.slice/crio-189118599fb9bb2a74ef044e2de4c526ec9971dc1bc861a0884ba5a719b71d5a WatchSource:0}: Error finding container 189118599fb9bb2a74ef044e2de4c526ec9971dc1bc861a0884ba5a719b71d5a: Status 404 returned error can't find the container with id 189118599fb9bb2a74ef044e2de4c526ec9971dc1bc861a0884ba5a719b71d5a Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.127048 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.200917 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-combined-ca-bundle\") pod \"98441635-6cfa-4207-957c-6161496d848c\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.201040 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-config-data\") pod \"98441635-6cfa-4207-957c-6161496d848c\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.201121 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhmvv\" (UniqueName: \"kubernetes.io/projected/98441635-6cfa-4207-957c-6161496d848c-kube-api-access-fhmvv\") pod \"98441635-6cfa-4207-957c-6161496d848c\" (UID: \"98441635-6cfa-4207-957c-6161496d848c\") " Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.208354 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98441635-6cfa-4207-957c-6161496d848c-kube-api-access-fhmvv" (OuterVolumeSpecName: "kube-api-access-fhmvv") pod "98441635-6cfa-4207-957c-6161496d848c" (UID: "98441635-6cfa-4207-957c-6161496d848c"). InnerVolumeSpecName "kube-api-access-fhmvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.233195 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98441635-6cfa-4207-957c-6161496d848c" (UID: "98441635-6cfa-4207-957c-6161496d848c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.233729 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-config-data" (OuterVolumeSpecName: "config-data") pod "98441635-6cfa-4207-957c-6161496d848c" (UID: "98441635-6cfa-4207-957c-6161496d848c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.303510 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhmvv\" (UniqueName: \"kubernetes.io/projected/98441635-6cfa-4207-957c-6161496d848c-kube-api-access-fhmvv\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.303559 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.303570 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98441635-6cfa-4207-957c-6161496d848c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.370559 4805 generic.go:334] "Generic (PLEG): container finished" podID="98441635-6cfa-4207-957c-6161496d848c" containerID="3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d" exitCode=137 Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.370664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"98441635-6cfa-4207-957c-6161496d848c","Type":"ContainerDied","Data":"3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d"} Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.370714 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"98441635-6cfa-4207-957c-6161496d848c","Type":"ContainerDied","Data":"e3f2e8475d53ab12f687756cccd0756a8daee7e6cfd16eee88e16f45e022e1a5"} Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.370738 4805 scope.go:117] "RemoveContainer" containerID="3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.371275 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.373275 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535462-ks82t" event={"ID":"50a09872-8737-49f7-97b5-6075717d1336","Type":"ContainerStarted","Data":"189118599fb9bb2a74ef044e2de4c526ec9971dc1bc861a0884ba5a719b71d5a"} Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.381394 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.412009 4805 scope.go:117] "RemoveContainer" containerID="3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d" Feb 26 17:42:01 crc kubenswrapper[4805]: E0226 17:42:01.413439 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d\": container with ID starting with 3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d not found: ID does not exist" containerID="3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.413495 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d"} err="failed to get container status 
\"3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d\": rpc error: code = NotFound desc = could not find container \"3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d\": container with ID starting with 3dc62c01fa1aeab19f678f458abf6f92fba5578bc662e81787685129d94a098d not found: ID does not exist" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.449967 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.461612 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.548753 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:42:01 crc kubenswrapper[4805]: E0226 17:42:01.549279 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98441635-6cfa-4207-957c-6161496d848c" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.549301 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="98441635-6cfa-4207-957c-6161496d848c" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.549568 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="98441635-6cfa-4207-957c-6161496d848c" containerName="nova-cell1-novncproxy-novncproxy" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.550459 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.552529 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.552948 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.553575 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.558488 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.711899 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.711970 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.712684 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc 
kubenswrapper[4805]: I0226 17:42:01.712736 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.712758 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-482fs\" (UniqueName: \"kubernetes.io/projected/404c1e92-b497-4dfe-aeb4-54db98639b48-kube-api-access-482fs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.814988 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.815069 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.815166 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 
17:42:01.815207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.815237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-482fs\" (UniqueName: \"kubernetes.io/projected/404c1e92-b497-4dfe-aeb4-54db98639b48-kube-api-access-482fs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.819439 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.819999 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.823732 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.826203 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/404c1e92-b497-4dfe-aeb4-54db98639b48-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.838636 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-482fs\" (UniqueName: \"kubernetes.io/projected/404c1e92-b497-4dfe-aeb4-54db98639b48-kube-api-access-482fs\") pod \"nova-cell1-novncproxy-0\" (UID: \"404c1e92-b497-4dfe-aeb4-54db98639b48\") " pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:01 crc kubenswrapper[4805]: I0226 17:42:01.865504 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:02 crc kubenswrapper[4805]: I0226 17:42:02.347609 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 26 17:42:02 crc kubenswrapper[4805]: I0226 17:42:02.397603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"404c1e92-b497-4dfe-aeb4-54db98639b48","Type":"ContainerStarted","Data":"22e19dc4321f71b152fc2973fe0506457999cc5ad40b7acf2f9f4917e3ed5b90"} Feb 26 17:42:02 crc kubenswrapper[4805]: I0226 17:42:02.964791 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98441635-6cfa-4207-957c-6161496d848c" path="/var/lib/kubelet/pods/98441635-6cfa-4207-957c-6161496d848c/volumes" Feb 26 17:42:03 crc kubenswrapper[4805]: I0226 17:42:03.416648 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"404c1e92-b497-4dfe-aeb4-54db98639b48","Type":"ContainerStarted","Data":"41ba8d8ac202421a1b19f950ce6dd11705a8b23c5afa092daff0f2bcb508b3d3"} Feb 26 17:42:03 crc kubenswrapper[4805]: I0226 17:42:03.422695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29535462-ks82t" event={"ID":"50a09872-8737-49f7-97b5-6075717d1336","Type":"ContainerStarted","Data":"d9487905cdea19a9180ce1eef845149aed5d0db0b6a772bf776bd7871ed2b09a"} Feb 26 17:42:03 crc kubenswrapper[4805]: I0226 17:42:03.443041 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.442996705 podStartE2EDuration="2.442996705s" podCreationTimestamp="2026-02-26 17:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:03.437600659 +0000 UTC m=+1637.999354998" watchObservedRunningTime="2026-02-26 17:42:03.442996705 +0000 UTC m=+1638.004751044" Feb 26 17:42:03 crc kubenswrapper[4805]: I0226 17:42:03.467644 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535462-ks82t" podStartSLOduration=2.232906531 podStartE2EDuration="3.467622597s" podCreationTimestamp="2026-02-26 17:42:00 +0000 UTC" firstStartedPulling="2026-02-26 17:42:01.020287973 +0000 UTC m=+1635.582042322" lastFinishedPulling="2026-02-26 17:42:02.255004049 +0000 UTC m=+1636.816758388" observedRunningTime="2026-02-26 17:42:03.461713468 +0000 UTC m=+1638.023467807" watchObservedRunningTime="2026-02-26 17:42:03.467622597 +0000 UTC m=+1638.029376936" Feb 26 17:42:04 crc kubenswrapper[4805]: I0226 17:42:04.431964 4805 generic.go:334] "Generic (PLEG): container finished" podID="50a09872-8737-49f7-97b5-6075717d1336" containerID="d9487905cdea19a9180ce1eef845149aed5d0db0b6a772bf776bd7871ed2b09a" exitCode=0 Feb 26 17:42:04 crc kubenswrapper[4805]: I0226 17:42:04.432133 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535462-ks82t" event={"ID":"50a09872-8737-49f7-97b5-6075717d1336","Type":"ContainerDied","Data":"d9487905cdea19a9180ce1eef845149aed5d0db0b6a772bf776bd7871ed2b09a"} Feb 26 
17:42:04 crc kubenswrapper[4805]: I0226 17:42:04.565304 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 17:42:04 crc kubenswrapper[4805]: I0226 17:42:04.565603 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 17:42:04 crc kubenswrapper[4805]: I0226 17:42:04.565892 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 17:42:04 crc kubenswrapper[4805]: I0226 17:42:04.569829 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.446681 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.457362 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.704314 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54dd998c-nw4mj"] Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.706862 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.719633 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-nw4mj"] Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.732371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-svc\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.732437 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8clm8\" (UniqueName: \"kubernetes.io/projected/a8874dd5-500c-4544-acd3-2749ede9ef23-kube-api-access-8clm8\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.732463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.732593 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-config\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.732639 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.732705 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.837078 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-config\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.837159 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-nb\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.837246 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.837315 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-svc\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.837395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8clm8\" (UniqueName: \"kubernetes.io/projected/a8874dd5-500c-4544-acd3-2749ede9ef23-kube-api-access-8clm8\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.837423 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.838297 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-sb\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.838318 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-config\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.839131 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-nb\") pod 
\"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.839332 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-svc\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.839961 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-swift-storage-0\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:05 crc kubenswrapper[4805]: I0226 17:42:05.867544 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8clm8\" (UniqueName: \"kubernetes.io/projected/a8874dd5-500c-4544-acd3-2749ede9ef23-kube-api-access-8clm8\") pod \"dnsmasq-dns-54dd998c-nw4mj\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.039049 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.219791 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.357727 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj748\" (UniqueName: \"kubernetes.io/projected/50a09872-8737-49f7-97b5-6075717d1336-kube-api-access-cj748\") pod \"50a09872-8737-49f7-97b5-6075717d1336\" (UID: \"50a09872-8737-49f7-97b5-6075717d1336\") " Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.377771 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a09872-8737-49f7-97b5-6075717d1336-kube-api-access-cj748" (OuterVolumeSpecName: "kube-api-access-cj748") pod "50a09872-8737-49f7-97b5-6075717d1336" (UID: "50a09872-8737-49f7-97b5-6075717d1336"). InnerVolumeSpecName "kube-api-access-cj748". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.473635 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj748\" (UniqueName: \"kubernetes.io/projected/50a09872-8737-49f7-97b5-6075717d1336-kube-api-access-cj748\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.485557 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535462-ks82t" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.485574 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535462-ks82t" event={"ID":"50a09872-8737-49f7-97b5-6075717d1336","Type":"ContainerDied","Data":"189118599fb9bb2a74ef044e2de4c526ec9971dc1bc861a0884ba5a719b71d5a"} Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.485665 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189118599fb9bb2a74ef044e2de4c526ec9971dc1bc861a0884ba5a719b71d5a" Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.720787 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-nw4mj"] Feb 26 17:42:06 crc kubenswrapper[4805]: I0226 17:42:06.875393 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:07 crc kubenswrapper[4805]: I0226 17:42:07.326093 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535456-ngmsm"] Feb 26 17:42:07 crc kubenswrapper[4805]: I0226 17:42:07.343964 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535456-ngmsm"] Feb 26 17:42:07 crc kubenswrapper[4805]: I0226 17:42:07.492804 4805 generic.go:334] "Generic (PLEG): container finished" podID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerID="1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c" exitCode=0 Feb 26 17:42:07 crc kubenswrapper[4805]: I0226 17:42:07.492885 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" event={"ID":"a8874dd5-500c-4544-acd3-2749ede9ef23","Type":"ContainerDied","Data":"1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c"} Feb 26 17:42:07 crc kubenswrapper[4805]: I0226 17:42:07.492975 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-54dd998c-nw4mj" event={"ID":"a8874dd5-500c-4544-acd3-2749ede9ef23","Type":"ContainerStarted","Data":"6f5444bfbaaf92e678e327794a20525a96ac4e2aad619cec51aa438a8eaf9012"} Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.511174 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" event={"ID":"a8874dd5-500c-4544-acd3-2749ede9ef23","Type":"ContainerStarted","Data":"6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d"} Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.512271 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.542559 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" podStartSLOduration=3.542523043 podStartE2EDuration="3.542523043s" podCreationTimestamp="2026-02-26 17:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:08.534760137 +0000 UTC m=+1643.096514466" watchObservedRunningTime="2026-02-26 17:42:08.542523043 +0000 UTC m=+1643.104277382" Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.603852 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.604216 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-log" containerID="cri-o://a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960" gracePeriod=30 Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.604239 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-api" 
containerID="cri-o://6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059" gracePeriod=30 Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.646151 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.646672 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-central-agent" containerID="cri-o://90a1fd003b4552a39e757fe06412af50e4e72ac14729bcd76b602d2c54dfc78e" gracePeriod=30 Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.646878 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="proxy-httpd" containerID="cri-o://dd573679be1333bf9a00cc7371870ac6e9706840b6e8f6eb384638033122daaa" gracePeriod=30 Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.646983 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="sg-core" containerID="cri-o://47c88961793a4c8a86de2f3790a5d9efb475db6b202bbdc5e982e1c1165d8409" gracePeriod=30 Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.647126 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-notification-agent" containerID="cri-o://fd750a6d60081cb150a2e1a3913faf4906d0f9c8f270c684e32fd57f1c1334a5" gracePeriod=30 Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 17:42:08.660840 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.230:3000/\": EOF" Feb 26 17:42:08 crc kubenswrapper[4805]: I0226 
17:42:08.971104 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfd9013-6210-4ca0-b7d6-ce58c547779b" path="/var/lib/kubelet/pods/bbfd9013-6210-4ca0-b7d6-ce58c547779b/volumes" Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.537259 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0e114c8-fc37-47f7-a02f-202774708b04" containerID="a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960" exitCode=143 Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.537367 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0e114c8-fc37-47f7-a02f-202774708b04","Type":"ContainerDied","Data":"a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960"} Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.542136 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerID="dd573679be1333bf9a00cc7371870ac6e9706840b6e8f6eb384638033122daaa" exitCode=0 Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.542178 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerID="47c88961793a4c8a86de2f3790a5d9efb475db6b202bbdc5e982e1c1165d8409" exitCode=2 Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.542187 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerID="90a1fd003b4552a39e757fe06412af50e4e72ac14729bcd76b602d2c54dfc78e" exitCode=0 Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.542473 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerDied","Data":"dd573679be1333bf9a00cc7371870ac6e9706840b6e8f6eb384638033122daaa"} Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.542632 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerDied","Data":"47c88961793a4c8a86de2f3790a5d9efb475db6b202bbdc5e982e1c1165d8409"} Feb 26 17:42:09 crc kubenswrapper[4805]: I0226 17:42:09.542740 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerDied","Data":"90a1fd003b4552a39e757fe06412af50e4e72ac14729bcd76b602d2c54dfc78e"} Feb 26 17:42:10 crc kubenswrapper[4805]: I0226 17:42:10.554444 4805 generic.go:334] "Generic (PLEG): container finished" podID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerID="fd750a6d60081cb150a2e1a3913faf4906d0f9c8f270c684e32fd57f1c1334a5" exitCode=0 Feb 26 17:42:10 crc kubenswrapper[4805]: I0226 17:42:10.554802 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerDied","Data":"fd750a6d60081cb150a2e1a3913faf4906d0f9c8f270c684e32fd57f1c1334a5"} Feb 26 17:42:10 crc kubenswrapper[4805]: I0226 17:42:10.908732 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098148 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-config-data\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098324 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-log-httpd\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098454 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-scripts\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098533 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-sg-core-conf-yaml\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098619 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-ceilometer-tls-certs\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ft99\" (UniqueName: 
\"kubernetes.io/projected/8f51903c-b83f-44cb-8369-d99ed92259fa-kube-api-access-6ft99\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098744 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-combined-ca-bundle\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098855 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-run-httpd\") pod \"8f51903c-b83f-44cb-8369-d99ed92259fa\" (UID: \"8f51903c-b83f-44cb-8369-d99ed92259fa\") " Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.098730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.099795 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.101378 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.101405 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8f51903c-b83f-44cb-8369-d99ed92259fa-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.104978 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-scripts" (OuterVolumeSpecName: "scripts") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.120741 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f51903c-b83f-44cb-8369-d99ed92259fa-kube-api-access-6ft99" (OuterVolumeSpecName: "kube-api-access-6ft99") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "kube-api-access-6ft99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.151602 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.174686 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.200924 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.204010 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.204082 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.204099 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.204114 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ft99\" (UniqueName: 
\"kubernetes.io/projected/8f51903c-b83f-44cb-8369-d99ed92259fa-kube-api-access-6ft99\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.204124 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.256494 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-config-data" (OuterVolumeSpecName: "config-data") pod "8f51903c-b83f-44cb-8369-d99ed92259fa" (UID: "8f51903c-b83f-44cb-8369-d99ed92259fa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.306229 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f51903c-b83f-44cb-8369-d99ed92259fa-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.579515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8f51903c-b83f-44cb-8369-d99ed92259fa","Type":"ContainerDied","Data":"8039c72169635d96ba91048e5d709b14d6b61981f2682de75e54a63af661d8c7"} Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.579586 4805 scope.go:117] "RemoveContainer" containerID="dd573679be1333bf9a00cc7371870ac6e9706840b6e8f6eb384638033122daaa" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.579680 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.620043 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.626576 4805 scope.go:117] "RemoveContainer" containerID="47c88961793a4c8a86de2f3790a5d9efb475db6b202bbdc5e982e1c1165d8409" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.631898 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.653077 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:42:11 crc kubenswrapper[4805]: E0226 17:42:11.653963 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-notification-agent" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.654082 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-notification-agent" Feb 26 17:42:11 crc kubenswrapper[4805]: E0226 17:42:11.654199 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-central-agent" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.654308 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-central-agent" Feb 26 17:42:11 crc kubenswrapper[4805]: E0226 17:42:11.654427 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a09872-8737-49f7-97b5-6075717d1336" containerName="oc" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.654487 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a09872-8737-49f7-97b5-6075717d1336" containerName="oc" Feb 26 17:42:11 crc kubenswrapper[4805]: E0226 17:42:11.654554 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="proxy-httpd" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.654609 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="proxy-httpd" Feb 26 17:42:11 crc kubenswrapper[4805]: E0226 17:42:11.654697 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="sg-core" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.654761 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="sg-core" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.655101 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="sg-core" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.655196 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="proxy-httpd" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.655283 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-notification-agent" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.655365 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" containerName="ceilometer-central-agent" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.655433 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a09872-8737-49f7-97b5-6075717d1336" containerName="oc" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.658137 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.663148 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.663541 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.664919 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.671512 4805 scope.go:117] "RemoveContainer" containerID="fd750a6d60081cb150a2e1a3913faf4906d0f9c8f270c684e32fd57f1c1334a5" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.707993 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.713368 4805 scope.go:117] "RemoveContainer" containerID="90a1fd003b4552a39e757fe06412af50e4e72ac14729bcd76b602d2c54dfc78e" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.820942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-log-httpd\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821299 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821377 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821406 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-scripts\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wzxf\" (UniqueName: \"kubernetes.io/projected/0be4e82e-5695-4f70-9e35-8ef03d0de22c-kube-api-access-7wzxf\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821481 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821506 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-config-data\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.821557 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-run-httpd\") pod \"ceilometer-0\" (UID: 
\"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.867105 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.900623 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.922971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923051 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-scripts\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923098 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wzxf\" (UniqueName: \"kubernetes.io/projected/0be4e82e-5695-4f70-9e35-8ef03d0de22c-kube-api-access-7wzxf\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923149 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923182 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-config-data\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923243 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-run-httpd\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923276 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-log-httpd\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.923313 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.926629 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-run-httpd\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.926918 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-log-httpd\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 
17:42:11.931857 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.932647 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.943108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-scripts\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.950879 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.958523 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-config-data\") pod \"ceilometer-0\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.960917 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wzxf\" (UniqueName: \"kubernetes.io/projected/0be4e82e-5695-4f70-9e35-8ef03d0de22c-kube-api-access-7wzxf\") pod \"ceilometer-0\" (UID: 
\"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " pod="openstack/ceilometer-0" Feb 26 17:42:11 crc kubenswrapper[4805]: I0226 17:42:11.982051 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.208036 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.341252 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-config-data\") pod \"c0e114c8-fc37-47f7-a02f-202774708b04\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.341620 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0e114c8-fc37-47f7-a02f-202774708b04-logs\") pod \"c0e114c8-fc37-47f7-a02f-202774708b04\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.341804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnv49\" (UniqueName: \"kubernetes.io/projected/c0e114c8-fc37-47f7-a02f-202774708b04-kube-api-access-cnv49\") pod \"c0e114c8-fc37-47f7-a02f-202774708b04\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.341924 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-combined-ca-bundle\") pod \"c0e114c8-fc37-47f7-a02f-202774708b04\" (UID: \"c0e114c8-fc37-47f7-a02f-202774708b04\") " Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.343158 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c0e114c8-fc37-47f7-a02f-202774708b04-logs" (OuterVolumeSpecName: "logs") pod "c0e114c8-fc37-47f7-a02f-202774708b04" (UID: "c0e114c8-fc37-47f7-a02f-202774708b04"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.344442 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0e114c8-fc37-47f7-a02f-202774708b04-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.353388 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e114c8-fc37-47f7-a02f-202774708b04-kube-api-access-cnv49" (OuterVolumeSpecName: "kube-api-access-cnv49") pod "c0e114c8-fc37-47f7-a02f-202774708b04" (UID: "c0e114c8-fc37-47f7-a02f-202774708b04"). InnerVolumeSpecName "kube-api-access-cnv49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.428903 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0e114c8-fc37-47f7-a02f-202774708b04" (UID: "c0e114c8-fc37-47f7-a02f-202774708b04"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.448010 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnv49\" (UniqueName: \"kubernetes.io/projected/c0e114c8-fc37-47f7-a02f-202774708b04-kube-api-access-cnv49\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.448103 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.456362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-config-data" (OuterVolumeSpecName: "config-data") pod "c0e114c8-fc37-47f7-a02f-202774708b04" (UID: "c0e114c8-fc37-47f7-a02f-202774708b04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.580069 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e114c8-fc37-47f7-a02f-202774708b04-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.624845 4805 generic.go:334] "Generic (PLEG): container finished" podID="c0e114c8-fc37-47f7-a02f-202774708b04" containerID="6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059" exitCode=0 Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.625416 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0e114c8-fc37-47f7-a02f-202774708b04","Type":"ContainerDied","Data":"6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059"} Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.625784 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 
17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.625864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c0e114c8-fc37-47f7-a02f-202774708b04","Type":"ContainerDied","Data":"dd0ffc1a561993eca3a0cab4d9b7311cb6c7a5030f80559a99c2bde276c0eb19"} Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.625794 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.625821 4805 scope.go:117] "RemoveContainer" containerID="6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.668493 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.701541 4805 scope.go:117] "RemoveContainer" containerID="a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.704240 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.742151 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.769444 4805 scope.go:117] "RemoveContainer" containerID="6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059" Feb 26 17:42:12 crc kubenswrapper[4805]: E0226 17:42:12.769920 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059\": container with ID starting with 6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059 not found: ID does not exist" containerID="6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.769953 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059"} err="failed to get container status \"6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059\": rpc error: code = NotFound desc = could not find container \"6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059\": container with ID starting with 6cae855cea1b9b5d23f295af64fc954c9c2200ed7d58d1c0fe7745beee6e4059 not found: ID does not exist" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.769975 4805 scope.go:117] "RemoveContainer" containerID="a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960" Feb 26 17:42:12 crc kubenswrapper[4805]: E0226 17:42:12.770424 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960\": container with ID starting with a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960 not found: ID does not exist" containerID="a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.770491 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960"} err="failed to get container status \"a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960\": rpc error: code = NotFound desc = could not find container \"a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960\": container with ID starting with a9bc590f0519afaf47c26ec767672bb8154ee489da10b99683ec2eb6e0142960 not found: ID does not exist" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.776091 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:12 crc kubenswrapper[4805]: E0226 17:42:12.776775 4805 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-api" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.776799 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-api" Feb 26 17:42:12 crc kubenswrapper[4805]: E0226 17:42:12.776829 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-log" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.776839 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-log" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.777093 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-api" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.777121 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" containerName="nova-api-log" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.778640 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.781357 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.781627 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.782192 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.823248 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.895451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.896387 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9g5k\" (UniqueName: \"kubernetes.io/projected/b70cab06-e570-46df-a9ca-62b524278672-kube-api-access-l9g5k\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.896431 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-config-data\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.896495 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.896734 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b70cab06-e570-46df-a9ca-62b524278672-logs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.896843 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-public-tls-certs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.980391 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f51903c-b83f-44cb-8369-d99ed92259fa" path="/var/lib/kubelet/pods/8f51903c-b83f-44cb-8369-d99ed92259fa/volumes" Feb 26 17:42:12 crc kubenswrapper[4805]: I0226 17:42:12.981580 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e114c8-fc37-47f7-a02f-202774708b04" path="/var/lib/kubelet/pods/c0e114c8-fc37-47f7-a02f-202774708b04/volumes" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:12.999965 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.000476 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9g5k\" (UniqueName: 
\"kubernetes.io/projected/b70cab06-e570-46df-a9ca-62b524278672-kube-api-access-l9g5k\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.000577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-config-data\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.000672 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.000939 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b70cab06-e570-46df-a9ca-62b524278672-logs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.001239 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-public-tls-certs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.007158 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b70cab06-e570-46df-a9ca-62b524278672-logs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.011341 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.011777 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-public-tls-certs\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.013506 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.013774 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-config-data\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.015617 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-p9sz6"] Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.018176 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.020854 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.021701 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.041937 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p9sz6"] Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.049251 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9g5k\" (UniqueName: \"kubernetes.io/projected/b70cab06-e570-46df-a9ca-62b524278672-kube-api-access-l9g5k\") pod \"nova-api-0\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.118054 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.118789 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dkct\" (UniqueName: \"kubernetes.io/projected/ca29b268-971b-42f0-b538-4b02dfb52154-kube-api-access-4dkct\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.118852 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-config-data\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.118898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-scripts\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.119014 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.226530 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: 
\"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.230682 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dkct\" (UniqueName: \"kubernetes.io/projected/ca29b268-971b-42f0-b538-4b02dfb52154-kube-api-access-4dkct\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.231459 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.232233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-config-data\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.232433 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-scripts\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.241472 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-scripts\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " 
pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.251856 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-config-data\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.257731 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dkct\" (UniqueName: \"kubernetes.io/projected/ca29b268-971b-42f0-b538-4b02dfb52154-kube-api-access-4dkct\") pod \"nova-cell1-cell-mapping-p9sz6\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.477668 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.643591 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerStarted","Data":"ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18"} Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.643641 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerStarted","Data":"7fe1edced7c98ba2755c6ad82517ace47da734edf6ab40f6d3b9e5ebcfc96c06"} Feb 26 17:42:13 crc kubenswrapper[4805]: I0226 17:42:13.714540 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.032640 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p9sz6"] Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.665884 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p9sz6" event={"ID":"ca29b268-971b-42f0-b538-4b02dfb52154","Type":"ContainerStarted","Data":"4d4a28e52744c6908a4378d042554017ce47662a1927515efa01d296af788ffd"} Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.667660 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p9sz6" event={"ID":"ca29b268-971b-42f0-b538-4b02dfb52154","Type":"ContainerStarted","Data":"8b6eee1cab863a031ac4cd26fdc8102fe31ab0afae2f513b03a8bc2da77d9326"} Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.671498 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerStarted","Data":"ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145"} Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.674507 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b70cab06-e570-46df-a9ca-62b524278672","Type":"ContainerStarted","Data":"fc41152c3a80136808d564400d6834458be9f03a2966689d65cef752d686bbfa"} Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.674572 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b70cab06-e570-46df-a9ca-62b524278672","Type":"ContainerStarted","Data":"6ca1fffa5851a7960e0e7cc7284b7f53798248ada9c1e4511ab6a9cf9a5cb130"} Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.674587 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b70cab06-e570-46df-a9ca-62b524278672","Type":"ContainerStarted","Data":"ea19ecd857de072694c4fcc22fc01df60926823ca27fc1c9cd21bf2be64d7470"} Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.703634 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-p9sz6" podStartSLOduration=2.703611307 podStartE2EDuration="2.703611307s" 
podCreationTimestamp="2026-02-26 17:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:14.693524542 +0000 UTC m=+1649.255278891" watchObservedRunningTime="2026-02-26 17:42:14.703611307 +0000 UTC m=+1649.265365646" Feb 26 17:42:14 crc kubenswrapper[4805]: I0226 17:42:14.723044 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.723013577 podStartE2EDuration="2.723013577s" podCreationTimestamp="2026-02-26 17:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:14.720469313 +0000 UTC m=+1649.282223652" watchObservedRunningTime="2026-02-26 17:42:14.723013577 +0000 UTC m=+1649.284767916" Feb 26 17:42:15 crc kubenswrapper[4805]: I0226 17:42:15.687957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerStarted","Data":"0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8"} Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.042128 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.127433 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-wwk5t"] Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.127739 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerName="dnsmasq-dns" containerID="cri-o://e0fc914adb0fbf4f1d46ff6d613031ea90174eebf301c3431acc650c3007ab3a" gracePeriod=10 Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.722672 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerID="e0fc914adb0fbf4f1d46ff6d613031ea90174eebf301c3431acc650c3007ab3a" exitCode=0 Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.723267 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" event={"ID":"d867cdff-5a57-451e-bb64-6b28255e4ae6","Type":"ContainerDied","Data":"e0fc914adb0fbf4f1d46ff6d613031ea90174eebf301c3431acc650c3007ab3a"} Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.901206 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.942168 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-svc\") pod \"d867cdff-5a57-451e-bb64-6b28255e4ae6\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.942230 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-sb\") pod \"d867cdff-5a57-451e-bb64-6b28255e4ae6\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.942441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-nb\") pod \"d867cdff-5a57-451e-bb64-6b28255e4ae6\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.942552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-config\") pod \"d867cdff-5a57-451e-bb64-6b28255e4ae6\" (UID: 
\"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.942602 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxs2g\" (UniqueName: \"kubernetes.io/projected/d867cdff-5a57-451e-bb64-6b28255e4ae6-kube-api-access-kxs2g\") pod \"d867cdff-5a57-451e-bb64-6b28255e4ae6\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.942693 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-swift-storage-0\") pod \"d867cdff-5a57-451e-bb64-6b28255e4ae6\" (UID: \"d867cdff-5a57-451e-bb64-6b28255e4ae6\") " Feb 26 17:42:16 crc kubenswrapper[4805]: I0226 17:42:16.976764 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d867cdff-5a57-451e-bb64-6b28255e4ae6-kube-api-access-kxs2g" (OuterVolumeSpecName: "kube-api-access-kxs2g") pod "d867cdff-5a57-451e-bb64-6b28255e4ae6" (UID: "d867cdff-5a57-451e-bb64-6b28255e4ae6"). InnerVolumeSpecName "kube-api-access-kxs2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.047412 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxs2g\" (UniqueName: \"kubernetes.io/projected/d867cdff-5a57-451e-bb64-6b28255e4ae6-kube-api-access-kxs2g\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.054380 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d867cdff-5a57-451e-bb64-6b28255e4ae6" (UID: "d867cdff-5a57-451e-bb64-6b28255e4ae6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.065918 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d867cdff-5a57-451e-bb64-6b28255e4ae6" (UID: "d867cdff-5a57-451e-bb64-6b28255e4ae6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.071004 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-config" (OuterVolumeSpecName: "config") pod "d867cdff-5a57-451e-bb64-6b28255e4ae6" (UID: "d867cdff-5a57-451e-bb64-6b28255e4ae6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.104674 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d867cdff-5a57-451e-bb64-6b28255e4ae6" (UID: "d867cdff-5a57-451e-bb64-6b28255e4ae6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.111687 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d867cdff-5a57-451e-bb64-6b28255e4ae6" (UID: "d867cdff-5a57-451e-bb64-6b28255e4ae6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.150260 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.150305 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.150321 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.150332 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.150347 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d867cdff-5a57-451e-bb64-6b28255e4ae6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.770441 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" event={"ID":"d867cdff-5a57-451e-bb64-6b28255e4ae6","Type":"ContainerDied","Data":"0e8093b8880fff5c912ba1b3957ff11fbcdbc178b27e878bdcdd3904b4f74c42"} Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.770826 4805 scope.go:117] "RemoveContainer" containerID="e0fc914adb0fbf4f1d46ff6d613031ea90174eebf301c3431acc650c3007ab3a" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.770987 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-884c8b8f5-wwk5t" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.804392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerStarted","Data":"46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e"} Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.806244 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.818076 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-wwk5t"] Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.822032 4805 scope.go:117] "RemoveContainer" containerID="6c493379b4cd5b0fd727761c5e0a6ca6cfe54c63a0f32a3b9aa563682302b314" Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.839150 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-884c8b8f5-wwk5t"] Feb 26 17:42:17 crc kubenswrapper[4805]: I0226 17:42:17.863277 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.057111188 podStartE2EDuration="6.863251191s" podCreationTimestamp="2026-02-26 17:42:11 +0000 UTC" firstStartedPulling="2026-02-26 17:42:12.615691275 +0000 UTC m=+1647.177445614" lastFinishedPulling="2026-02-26 17:42:17.421831278 +0000 UTC m=+1651.983585617" observedRunningTime="2026-02-26 17:42:17.829092578 +0000 UTC m=+1652.390846917" watchObservedRunningTime="2026-02-26 17:42:17.863251191 +0000 UTC m=+1652.425005530" Feb 26 17:42:18 crc kubenswrapper[4805]: I0226 17:42:18.971385 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" path="/var/lib/kubelet/pods/d867cdff-5a57-451e-bb64-6b28255e4ae6/volumes" Feb 26 17:42:21 crc kubenswrapper[4805]: I0226 17:42:21.856961 4805 generic.go:334] "Generic (PLEG): 
container finished" podID="ca29b268-971b-42f0-b538-4b02dfb52154" containerID="4d4a28e52744c6908a4378d042554017ce47662a1927515efa01d296af788ffd" exitCode=0 Feb 26 17:42:21 crc kubenswrapper[4805]: I0226 17:42:21.857005 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p9sz6" event={"ID":"ca29b268-971b-42f0-b538-4b02dfb52154","Type":"ContainerDied","Data":"4d4a28e52744c6908a4378d042554017ce47662a1927515efa01d296af788ffd"} Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.119463 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.119843 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.395180 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.503051 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-scripts\") pod \"ca29b268-971b-42f0-b538-4b02dfb52154\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.503302 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dkct\" (UniqueName: \"kubernetes.io/projected/ca29b268-971b-42f0-b538-4b02dfb52154-kube-api-access-4dkct\") pod \"ca29b268-971b-42f0-b538-4b02dfb52154\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.503406 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-config-data\") pod \"ca29b268-971b-42f0-b538-4b02dfb52154\" (UID: 
\"ca29b268-971b-42f0-b538-4b02dfb52154\") " Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.503441 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-combined-ca-bundle\") pod \"ca29b268-971b-42f0-b538-4b02dfb52154\" (UID: \"ca29b268-971b-42f0-b538-4b02dfb52154\") " Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.535471 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-scripts" (OuterVolumeSpecName: "scripts") pod "ca29b268-971b-42f0-b538-4b02dfb52154" (UID: "ca29b268-971b-42f0-b538-4b02dfb52154"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.535682 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca29b268-971b-42f0-b538-4b02dfb52154-kube-api-access-4dkct" (OuterVolumeSpecName: "kube-api-access-4dkct") pod "ca29b268-971b-42f0-b538-4b02dfb52154" (UID: "ca29b268-971b-42f0-b538-4b02dfb52154"). InnerVolumeSpecName "kube-api-access-4dkct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.546399 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-config-data" (OuterVolumeSpecName: "config-data") pod "ca29b268-971b-42f0-b538-4b02dfb52154" (UID: "ca29b268-971b-42f0-b538-4b02dfb52154"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.551189 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca29b268-971b-42f0-b538-4b02dfb52154" (UID: "ca29b268-971b-42f0-b538-4b02dfb52154"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.606673 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.606733 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.606749 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca29b268-971b-42f0-b538-4b02dfb52154-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.606760 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dkct\" (UniqueName: \"kubernetes.io/projected/ca29b268-971b-42f0-b538-4b02dfb52154-kube-api-access-4dkct\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.887901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p9sz6" event={"ID":"ca29b268-971b-42f0-b538-4b02dfb52154","Type":"ContainerDied","Data":"8b6eee1cab863a031ac4cd26fdc8102fe31ab0afae2f513b03a8bc2da77d9326"} Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.887956 4805 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="8b6eee1cab863a031ac4cd26fdc8102fe31ab0afae2f513b03a8bc2da77d9326" Feb 26 17:42:23 crc kubenswrapper[4805]: I0226 17:42:23.887994 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p9sz6" Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.071075 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.071320 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-log" containerID="cri-o://6ca1fffa5851a7960e0e7cc7284b7f53798248ada9c1e4511ab6a9cf9a5cb130" gracePeriod=30 Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.071498 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-api" containerID="cri-o://fc41152c3a80136808d564400d6834458be9f03a2966689d65cef752d686bbfa" gracePeriod=30 Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.087122 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.235:8774/\": EOF" Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.087258 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.235:8774/\": EOF" Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.104989 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.105247 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" containerName="nova-scheduler-scheduler" containerID="cri-o://da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" gracePeriod=30 Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.132662 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.132935 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-log" containerID="cri-o://65db96f90483349b35e836f267b6667465eedc484347e24fc8875fd638bf0ed7" gracePeriod=30 Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.133120 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-metadata" containerID="cri-o://2554de4d0a1888797a657e116a9ccb8477ee095ce0e590f114396b66fb531eb4" gracePeriod=30 Feb 26 17:42:24 crc kubenswrapper[4805]: E0226 17:42:24.622054 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:42:24 crc kubenswrapper[4805]: E0226 17:42:24.625804 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:42:24 crc kubenswrapper[4805]: E0226 17:42:24.628038 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:42:24 crc kubenswrapper[4805]: E0226 17:42:24.628155 4805 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" containerName="nova-scheduler-scheduler" Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.903009 4805 generic.go:334] "Generic (PLEG): container finished" podID="b70cab06-e570-46df-a9ca-62b524278672" containerID="6ca1fffa5851a7960e0e7cc7284b7f53798248ada9c1e4511ab6a9cf9a5cb130" exitCode=143 Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.903098 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b70cab06-e570-46df-a9ca-62b524278672","Type":"ContainerDied","Data":"6ca1fffa5851a7960e0e7cc7284b7f53798248ada9c1e4511ab6a9cf9a5cb130"} Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.905650 4805 generic.go:334] "Generic (PLEG): container finished" podID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerID="65db96f90483349b35e836f267b6667465eedc484347e24fc8875fd638bf0ed7" exitCode=143 Feb 26 17:42:24 crc kubenswrapper[4805]: I0226 17:42:24.905717 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"655f50ae-6f7d-46d2-bae9-5c0bd4df422d","Type":"ContainerDied","Data":"65db96f90483349b35e836f267b6667465eedc484347e24fc8875fd638bf0ed7"} Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.090640 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pplgn"] Feb 26 17:42:27 crc kubenswrapper[4805]: E0226 17:42:27.091653 4805 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca29b268-971b-42f0-b538-4b02dfb52154" containerName="nova-manage" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.091674 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca29b268-971b-42f0-b538-4b02dfb52154" containerName="nova-manage" Feb 26 17:42:27 crc kubenswrapper[4805]: E0226 17:42:27.091713 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerName="dnsmasq-dns" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.091724 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerName="dnsmasq-dns" Feb 26 17:42:27 crc kubenswrapper[4805]: E0226 17:42:27.091744 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerName="init" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.091753 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerName="init" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.092086 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca29b268-971b-42f0-b538-4b02dfb52154" containerName="nova-manage" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.092102 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d867cdff-5a57-451e-bb64-6b28255e4ae6" containerName="dnsmasq-dns" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.094169 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.106825 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pplgn"] Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.196942 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfqk\" (UniqueName: \"kubernetes.io/projected/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-kube-api-access-hvfqk\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.197048 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-catalog-content\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.197812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-utilities\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.299768 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-utilities\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.299892 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hvfqk\" (UniqueName: \"kubernetes.io/projected/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-kube-api-access-hvfqk\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.299938 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-catalog-content\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.300641 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-utilities\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.300708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-catalog-content\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.324395 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfqk\" (UniqueName: \"kubernetes.io/projected/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-kube-api-access-hvfqk\") pod \"community-operators-pplgn\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:27 crc kubenswrapper[4805]: I0226 17:42:27.416111 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.116362 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pplgn"] Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.550776 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": read tcp 10.217.0.2:53328->10.217.0.226:8775: read: connection reset by peer" Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.551935 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": read tcp 10.217.0.2:53344->10.217.0.226:8775: read: connection reset by peer" Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.971772 4805 generic.go:334] "Generic (PLEG): container finished" podID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerID="2554de4d0a1888797a657e116a9ccb8477ee095ce0e590f114396b66fb531eb4" exitCode=0 Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.972419 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"655f50ae-6f7d-46d2-bae9-5c0bd4df422d","Type":"ContainerDied","Data":"2554de4d0a1888797a657e116a9ccb8477ee095ce0e590f114396b66fb531eb4"} Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.979059 4805 generic.go:334] "Generic (PLEG): container finished" podID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerID="c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56" exitCode=0 Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.979483 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" 
event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerDied","Data":"c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56"} Feb 26 17:42:28 crc kubenswrapper[4805]: I0226 17:42:28.979748 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerStarted","Data":"e4ba6f43601985c80830a110a3d91ea5481d7ceeac39d267c1aab351de07993d"} Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.372050 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.571997 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-nova-metadata-tls-certs\") pod \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.572122 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7jxd\" (UniqueName: \"kubernetes.io/projected/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-kube-api-access-d7jxd\") pod \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.572614 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-logs\") pod \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.573484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-config-data\") pod 
\"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.573817 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-logs" (OuterVolumeSpecName: "logs") pod "655f50ae-6f7d-46d2-bae9-5c0bd4df422d" (UID: "655f50ae-6f7d-46d2-bae9-5c0bd4df422d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.575315 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-combined-ca-bundle\") pod \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\" (UID: \"655f50ae-6f7d-46d2-bae9-5c0bd4df422d\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.576638 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: E0226 17:42:29.617842 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf is running failed: container process not found" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:42:29 crc kubenswrapper[4805]: E0226 17:42:29.618473 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf is running failed: container process not found" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:42:29 crc kubenswrapper[4805]: E0226 17:42:29.619263 4805 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf is running failed: container process not found" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 26 17:42:29 crc kubenswrapper[4805]: E0226 17:42:29.619298 4805 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" containerName="nova-scheduler-scheduler" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.634153 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "655f50ae-6f7d-46d2-bae9-5c0bd4df422d" (UID: "655f50ae-6f7d-46d2-bae9-5c0bd4df422d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.634403 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-kube-api-access-d7jxd" (OuterVolumeSpecName: "kube-api-access-d7jxd") pod "655f50ae-6f7d-46d2-bae9-5c0bd4df422d" (UID: "655f50ae-6f7d-46d2-bae9-5c0bd4df422d"). InnerVolumeSpecName "kube-api-access-d7jxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.646512 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-config-data" (OuterVolumeSpecName: "config-data") pod "655f50ae-6f7d-46d2-bae9-5c0bd4df422d" (UID: "655f50ae-6f7d-46d2-bae9-5c0bd4df422d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.686339 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.686377 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.686394 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7jxd\" (UniqueName: \"kubernetes.io/projected/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-kube-api-access-d7jxd\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.707735 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "655f50ae-6f7d-46d2-bae9-5c0bd4df422d" (UID: "655f50ae-6f7d-46d2-bae9-5c0bd4df422d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.773647 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.788137 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlc5g\" (UniqueName: \"kubernetes.io/projected/c4d891fd-08b5-44ba-a363-11b4b8c30f74-kube-api-access-qlc5g\") pod \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.788195 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-combined-ca-bundle\") pod \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.788511 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-config-data\") pod \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\" (UID: \"c4d891fd-08b5-44ba-a363-11b4b8c30f74\") " Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.789027 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/655f50ae-6f7d-46d2-bae9-5c0bd4df422d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.796340 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d891fd-08b5-44ba-a363-11b4b8c30f74-kube-api-access-qlc5g" (OuterVolumeSpecName: "kube-api-access-qlc5g") pod "c4d891fd-08b5-44ba-a363-11b4b8c30f74" (UID: "c4d891fd-08b5-44ba-a363-11b4b8c30f74"). InnerVolumeSpecName "kube-api-access-qlc5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.837866 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4d891fd-08b5-44ba-a363-11b4b8c30f74" (UID: "c4d891fd-08b5-44ba-a363-11b4b8c30f74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.868457 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-config-data" (OuterVolumeSpecName: "config-data") pod "c4d891fd-08b5-44ba-a363-11b4b8c30f74" (UID: "c4d891fd-08b5-44ba-a363-11b4b8c30f74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.891471 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.891508 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlc5g\" (UniqueName: \"kubernetes.io/projected/c4d891fd-08b5-44ba-a363-11b4b8c30f74-kube-api-access-qlc5g\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:29 crc kubenswrapper[4805]: I0226 17:42:29.891519 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d891fd-08b5-44ba-a363-11b4b8c30f74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.028526 4805 generic.go:334] "Generic (PLEG): container finished" podID="b70cab06-e570-46df-a9ca-62b524278672" containerID="fc41152c3a80136808d564400d6834458be9f03a2966689d65cef752d686bbfa" 
exitCode=0 Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.028609 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b70cab06-e570-46df-a9ca-62b524278672","Type":"ContainerDied","Data":"fc41152c3a80136808d564400d6834458be9f03a2966689d65cef752d686bbfa"} Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.031926 4805 generic.go:334] "Generic (PLEG): container finished" podID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" exitCode=0 Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.032094 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4d891fd-08b5-44ba-a363-11b4b8c30f74","Type":"ContainerDied","Data":"da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf"} Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.032147 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4d891fd-08b5-44ba-a363-11b4b8c30f74","Type":"ContainerDied","Data":"c105b4ee5556542424686ba814255ee294051b01984e44f2190123886a941a3a"} Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.032177 4805 scope.go:117] "RemoveContainer" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.032353 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.040048 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"655f50ae-6f7d-46d2-bae9-5c0bd4df422d","Type":"ContainerDied","Data":"90e3249d74f5f4a6d6b8eaa17643bd6e1caf44d261ccfcecf4b100ba1001a663"} Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.040126 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.080171 4805 scope.go:117] "RemoveContainer" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" Feb 26 17:42:30 crc kubenswrapper[4805]: E0226 17:42:30.089716 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf\": container with ID starting with da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf not found: ID does not exist" containerID="da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.089797 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf"} err="failed to get container status \"da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf\": rpc error: code = NotFound desc = could not find container \"da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf\": container with ID starting with da638b5aa6bcb309e0fb1053f473d800759a75f22f0523c48bac14baf7e6a0cf not found: ID does not exist" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.089844 4805 scope.go:117] "RemoveContainer" containerID="2554de4d0a1888797a657e116a9ccb8477ee095ce0e590f114396b66fb531eb4" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.109141 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.158212 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.167570 4805 scope.go:117] "RemoveContainer" containerID="65db96f90483349b35e836f267b6667465eedc484347e24fc8875fd638bf0ed7" Feb 26 17:42:30 crc kubenswrapper[4805]: 
I0226 17:42:30.172801 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.176208 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.188267 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: E0226 17:42:30.188820 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-log" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.188842 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-log" Feb 26 17:42:30 crc kubenswrapper[4805]: E0226 17:42:30.188865 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-log" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.188874 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-log" Feb 26 17:42:30 crc kubenswrapper[4805]: E0226 17:42:30.188895 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-api" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.188903 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-api" Feb 26 17:42:30 crc kubenswrapper[4805]: E0226 17:42:30.188919 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" containerName="nova-scheduler-scheduler" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.188927 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" 
containerName="nova-scheduler-scheduler" Feb 26 17:42:30 crc kubenswrapper[4805]: E0226 17:42:30.188939 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-metadata" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.188947 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-metadata" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.189243 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-log" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.189281 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-log" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.189297 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" containerName="nova-scheduler-scheduler" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.189313 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" containerName="nova-metadata-metadata" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.189325 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70cab06-e570-46df-a9ca-62b524278672" containerName="nova-api-api" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.190345 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.197548 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.202918 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9g5k\" (UniqueName: \"kubernetes.io/projected/b70cab06-e570-46df-a9ca-62b524278672-kube-api-access-l9g5k\") pod \"b70cab06-e570-46df-a9ca-62b524278672\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.203117 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-public-tls-certs\") pod \"b70cab06-e570-46df-a9ca-62b524278672\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.203307 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-internal-tls-certs\") pod \"b70cab06-e570-46df-a9ca-62b524278672\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.203395 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b70cab06-e570-46df-a9ca-62b524278672-logs\") pod \"b70cab06-e570-46df-a9ca-62b524278672\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.203535 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-config-data\") pod \"b70cab06-e570-46df-a9ca-62b524278672\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " Feb 26 
17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.203640 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-combined-ca-bundle\") pod \"b70cab06-e570-46df-a9ca-62b524278672\" (UID: \"b70cab06-e570-46df-a9ca-62b524278672\") " Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.204559 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b7a4cd-7367-4048-9560-99759278c58b-config-data\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.204652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70cab06-e570-46df-a9ca-62b524278672-logs" (OuterVolumeSpecName: "logs") pod "b70cab06-e570-46df-a9ca-62b524278672" (UID: "b70cab06-e570-46df-a9ca-62b524278672"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.205631 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8gt\" (UniqueName: \"kubernetes.io/projected/a6b7a4cd-7367-4048-9560-99759278c58b-kube-api-access-qz8gt\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.206130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b7a4cd-7367-4048-9560-99759278c58b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.206553 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b70cab06-e570-46df-a9ca-62b524278672-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.223207 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70cab06-e570-46df-a9ca-62b524278672-kube-api-access-l9g5k" (OuterVolumeSpecName: "kube-api-access-l9g5k") pod "b70cab06-e570-46df-a9ca-62b524278672" (UID: "b70cab06-e570-46df-a9ca-62b524278672"). InnerVolumeSpecName "kube-api-access-l9g5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.224238 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.250252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-config-data" (OuterVolumeSpecName: "config-data") pod "b70cab06-e570-46df-a9ca-62b524278672" (UID: "b70cab06-e570-46df-a9ca-62b524278672"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.264256 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.292178 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b70cab06-e570-46df-a9ca-62b524278672" (UID: "b70cab06-e570-46df-a9ca-62b524278672"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.308485 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz8gt\" (UniqueName: \"kubernetes.io/projected/a6b7a4cd-7367-4048-9560-99759278c58b-kube-api-access-qz8gt\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.308621 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b7a4cd-7367-4048-9560-99759278c58b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.308680 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b7a4cd-7367-4048-9560-99759278c58b-config-data\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.308802 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9g5k\" (UniqueName: \"kubernetes.io/projected/b70cab06-e570-46df-a9ca-62b524278672-kube-api-access-l9g5k\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.308817 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.308829 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc 
kubenswrapper[4805]: I0226 17:42:30.311298 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b70cab06-e570-46df-a9ca-62b524278672" (UID: "b70cab06-e570-46df-a9ca-62b524278672"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.319985 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6b7a4cd-7367-4048-9560-99759278c58b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.328128 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.331607 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6b7a4cd-7367-4048-9560-99759278c58b-config-data\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.333808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz8gt\" (UniqueName: \"kubernetes.io/projected/a6b7a4cd-7367-4048-9560-99759278c58b-kube-api-access-qz8gt\") pod \"nova-scheduler-0\" (UID: \"a6b7a4cd-7367-4048-9560-99759278c58b\") " pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.337073 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.338214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.342673 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.342941 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.343706 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b70cab06-e570-46df-a9ca-62b524278672" (UID: "b70cab06-e570-46df-a9ca-62b524278672"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.410490 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14771d36-b57b-4f52-9367-0694e42e2cca-logs\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.410542 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmsz7\" (UniqueName: \"kubernetes.io/projected/14771d36-b57b-4f52-9367-0694e42e2cca-kube-api-access-jmsz7\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.410611 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.410763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-config-data\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.410896 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.411242 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.411261 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b70cab06-e570-46df-a9ca-62b524278672-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.512563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.512689 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14771d36-b57b-4f52-9367-0694e42e2cca-logs\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.512729 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmsz7\" (UniqueName: \"kubernetes.io/projected/14771d36-b57b-4f52-9367-0694e42e2cca-kube-api-access-jmsz7\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.512771 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.512835 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-config-data\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.514708 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14771d36-b57b-4f52-9367-0694e42e2cca-logs\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.516589 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.517516 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-config-data\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.518598 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.518716 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/14771d36-b57b-4f52-9367-0694e42e2cca-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.531614 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmsz7\" (UniqueName: \"kubernetes.io/projected/14771d36-b57b-4f52-9367-0694e42e2cca-kube-api-access-jmsz7\") pod \"nova-metadata-0\" (UID: \"14771d36-b57b-4f52-9367-0694e42e2cca\") " pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.700524 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.980002 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="655f50ae-6f7d-46d2-bae9-5c0bd4df422d" path="/var/lib/kubelet/pods/655f50ae-6f7d-46d2-bae9-5c0bd4df422d/volumes" Feb 26 17:42:30 crc kubenswrapper[4805]: I0226 17:42:30.980786 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d891fd-08b5-44ba-a363-11b4b8c30f74" path="/var/lib/kubelet/pods/c4d891fd-08b5-44ba-a363-11b4b8c30f74/volumes" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.079253 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.079262 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b70cab06-e570-46df-a9ca-62b524278672","Type":"ContainerDied","Data":"ea19ecd857de072694c4fcc22fc01df60926823ca27fc1c9cd21bf2be64d7470"} Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.080235 4805 scope.go:117] "RemoveContainer" containerID="fc41152c3a80136808d564400d6834458be9f03a2966689d65cef752d686bbfa" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.084843 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.088491 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerStarted","Data":"38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b"} Feb 26 17:42:31 crc kubenswrapper[4805]: W0226 17:42:31.117293 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6b7a4cd_7367_4048_9560_99759278c58b.slice/crio-b1089dd67d6fa46d157d9f05cadae1e016efea186d5ad034ca996ddf4ececcba 
WatchSource:0}: Error finding container b1089dd67d6fa46d157d9f05cadae1e016efea186d5ad034ca996ddf4ececcba: Status 404 returned error can't find the container with id b1089dd67d6fa46d157d9f05cadae1e016efea186d5ad034ca996ddf4ececcba Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.119352 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.131334 4805 scope.go:117] "RemoveContainer" containerID="6ca1fffa5851a7960e0e7cc7284b7f53798248ada9c1e4511ab6a9cf9a5cb130" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.140809 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.187392 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.192072 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.195615 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.196119 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.196242 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.203236 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.219925 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.244310 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.244410 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvv2b\" (UniqueName: \"kubernetes.io/projected/dcfa6af2-6085-43cb-94a8-a02bebd05f49-kube-api-access-fvv2b\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.244596 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-config-data\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.244801 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcfa6af2-6085-43cb-94a8-a02bebd05f49-logs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.244898 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.244974 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.346310 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-public-tls-certs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.346374 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.346403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvv2b\" (UniqueName: \"kubernetes.io/projected/dcfa6af2-6085-43cb-94a8-a02bebd05f49-kube-api-access-fvv2b\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.346468 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-config-data\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.346543 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcfa6af2-6085-43cb-94a8-a02bebd05f49-logs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.346586 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.348941 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcfa6af2-6085-43cb-94a8-a02bebd05f49-logs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.352587 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-public-tls-certs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.357620 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.360286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-config-data\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.360663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcfa6af2-6085-43cb-94a8-a02bebd05f49-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.366751 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fvv2b\" (UniqueName: \"kubernetes.io/projected/dcfa6af2-6085-43cb-94a8-a02bebd05f49-kube-api-access-fvv2b\") pod \"nova-api-0\" (UID: \"dcfa6af2-6085-43cb-94a8-a02bebd05f49\") " pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: I0226 17:42:31.588974 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 26 17:42:31 crc kubenswrapper[4805]: E0226 17:42:31.909244 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3c6ef35_7b6a_4dda_b774_d8607ac3a33c.slice/crio-38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3c6ef35_7b6a_4dda_b774_d8607ac3a33c.slice/crio-conmon-38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b.scope\": RecentStats: unable to find data in memory cache]" Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.104424 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a6b7a4cd-7367-4048-9560-99759278c58b","Type":"ContainerStarted","Data":"1ce967d18f89740a9415ec22c895ba4df8b507e5ae4f5ffef9bbb4674be5931c"} Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.104470 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a6b7a4cd-7367-4048-9560-99759278c58b","Type":"ContainerStarted","Data":"b1089dd67d6fa46d157d9f05cadae1e016efea186d5ad034ca996ddf4ececcba"} Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.120966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"14771d36-b57b-4f52-9367-0694e42e2cca","Type":"ContainerStarted","Data":"a88932491812cb137ab2e0ed88e356e40d01c4bb6816356e7e86754a90072353"} Feb 
26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.121030 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"14771d36-b57b-4f52-9367-0694e42e2cca","Type":"ContainerStarted","Data":"6e22ec5bbe85992a7a61eba5399099cbc6c663809d17f4b066abe7a15c912159"} Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.121040 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"14771d36-b57b-4f52-9367-0694e42e2cca","Type":"ContainerStarted","Data":"dac11ea46d3acf6844cfb707e8af97c8d8025d6543fe12af6f10b47715daf07e"} Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.135741 4805 generic.go:334] "Generic (PLEG): container finished" podID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerID="38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b" exitCode=0 Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.135789 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerDied","Data":"38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b"} Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.141221 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.157593 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.157565255 podStartE2EDuration="2.157565255s" podCreationTimestamp="2026-02-26 17:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:32.121039313 +0000 UTC m=+1666.682793682" watchObservedRunningTime="2026-02-26 17:42:32.157565255 +0000 UTC m=+1666.719319594" Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.185437 4805 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.185414638 podStartE2EDuration="2.185414638s" podCreationTimestamp="2026-02-26 17:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:32.147382998 +0000 UTC m=+1666.709137337" watchObservedRunningTime="2026-02-26 17:42:32.185414638 +0000 UTC m=+1666.747168977" Feb 26 17:42:32 crc kubenswrapper[4805]: I0226 17:42:32.965631 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70cab06-e570-46df-a9ca-62b524278672" path="/var/lib/kubelet/pods/b70cab06-e570-46df-a9ca-62b524278672/volumes" Feb 26 17:42:33 crc kubenswrapper[4805]: I0226 17:42:33.149555 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dcfa6af2-6085-43cb-94a8-a02bebd05f49","Type":"ContainerStarted","Data":"a00e293372d320b3acb5698269033c71651c76eefc1dd7618e13c3093aa3eb47"} Feb 26 17:42:33 crc kubenswrapper[4805]: I0226 17:42:33.149603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dcfa6af2-6085-43cb-94a8-a02bebd05f49","Type":"ContainerStarted","Data":"e0b1b4557fde03070577365e682c103193d18a53e18d264bb3666d424b86ce66"} Feb 26 17:42:33 crc kubenswrapper[4805]: I0226 17:42:33.149616 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dcfa6af2-6085-43cb-94a8-a02bebd05f49","Type":"ContainerStarted","Data":"7ce2ecc94937343796aab2328b5f300b06970e0c2aae6914c23f5248ad29568d"} Feb 26 17:42:33 crc kubenswrapper[4805]: I0226 17:42:33.156868 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerStarted","Data":"e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e"} Feb 26 17:42:33 crc kubenswrapper[4805]: I0226 17:42:33.193279 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.193243586 podStartE2EDuration="2.193243586s" podCreationTimestamp="2026-02-26 17:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:42:33.188523497 +0000 UTC m=+1667.750277856" watchObservedRunningTime="2026-02-26 17:42:33.193243586 +0000 UTC m=+1667.754997925" Feb 26 17:42:33 crc kubenswrapper[4805]: I0226 17:42:33.225656 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pplgn" podStartSLOduration=2.655723244 podStartE2EDuration="6.225630883s" podCreationTimestamp="2026-02-26 17:42:27 +0000 UTC" firstStartedPulling="2026-02-26 17:42:28.982679477 +0000 UTC m=+1663.544433816" lastFinishedPulling="2026-02-26 17:42:32.552587116 +0000 UTC m=+1667.114341455" observedRunningTime="2026-02-26 17:42:33.219385286 +0000 UTC m=+1667.781139655" watchObservedRunningTime="2026-02-26 17:42:33.225630883 +0000 UTC m=+1667.787385222" Feb 26 17:42:35 crc kubenswrapper[4805]: I0226 17:42:35.519081 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 26 17:42:35 crc kubenswrapper[4805]: I0226 17:42:35.702436 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 17:42:35 crc kubenswrapper[4805]: I0226 17:42:35.702522 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 26 17:42:37 crc kubenswrapper[4805]: I0226 17:42:37.416762 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:37 crc kubenswrapper[4805]: I0226 17:42:37.417122 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pplgn" Feb 26 
17:42:37 crc kubenswrapper[4805]: I0226 17:42:37.462310 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:38 crc kubenswrapper[4805]: I0226 17:42:38.293900 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:38 crc kubenswrapper[4805]: I0226 17:42:38.364875 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pplgn"] Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.253195 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pplgn" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="registry-server" containerID="cri-o://e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e" gracePeriod=2 Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.518856 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.576877 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.702119 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.704447 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.911433 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.998949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvfqk\" (UniqueName: \"kubernetes.io/projected/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-kube-api-access-hvfqk\") pod \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.999225 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-catalog-content\") pod \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " Feb 26 17:42:40 crc kubenswrapper[4805]: I0226 17:42:40.999402 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-utilities\") pod \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\" (UID: \"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c\") " Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.000406 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-utilities" (OuterVolumeSpecName: "utilities") pod "a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" (UID: "a3c6ef35-7b6a-4dda-b774-d8607ac3a33c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.005132 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.013928 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-kube-api-access-hvfqk" (OuterVolumeSpecName: "kube-api-access-hvfqk") pod "a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" (UID: "a3c6ef35-7b6a-4dda-b774-d8607ac3a33c"). InnerVolumeSpecName "kube-api-access-hvfqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.074340 4805 scope.go:117] "RemoveContainer" containerID="48cc9977d90049604cbba3c4e80c842c8e7a834fc1c9627b2fc0b84f24944ca9" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.083517 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" (UID: "a3c6ef35-7b6a-4dda-b774-d8607ac3a33c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.107246 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvfqk\" (UniqueName: \"kubernetes.io/projected/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-kube-api-access-hvfqk\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.107299 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.191259 4805 scope.go:117] "RemoveContainer" containerID="2817559e889b1a61eb26a2ea9dc87e75e4d3db17bcbb2aa350cec5dd4478ae3f" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.286303 4805 generic.go:334] "Generic (PLEG): container finished" podID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerID="e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e" exitCode=0 Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.286403 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pplgn" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.286381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerDied","Data":"e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e"} Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.286531 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pplgn" event={"ID":"a3c6ef35-7b6a-4dda-b774-d8607ac3a33c","Type":"ContainerDied","Data":"e4ba6f43601985c80830a110a3d91ea5481d7ceeac39d267c1aab351de07993d"} Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.286558 4805 scope.go:117] "RemoveContainer" containerID="e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.336785 4805 scope.go:117] "RemoveContainer" containerID="38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.341557 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.349345 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pplgn"] Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.385186 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pplgn"] Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.414246 4805 scope.go:117] "RemoveContainer" containerID="c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.530151 4805 scope.go:117] "RemoveContainer" containerID="e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e" Feb 26 17:42:41 crc kubenswrapper[4805]: E0226 
17:42:41.531598 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e\": container with ID starting with e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e not found: ID does not exist" containerID="e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.531650 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e"} err="failed to get container status \"e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e\": rpc error: code = NotFound desc = could not find container \"e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e\": container with ID starting with e0bf4b7eef0bcd5b51ffc6231d2a0122ec128f2032afb59b73cfb267a7209a7e not found: ID does not exist" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.531685 4805 scope.go:117] "RemoveContainer" containerID="38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b" Feb 26 17:42:41 crc kubenswrapper[4805]: E0226 17:42:41.532426 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b\": container with ID starting with 38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b not found: ID does not exist" containerID="38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.532510 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b"} err="failed to get container status \"38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b\": rpc 
error: code = NotFound desc = could not find container \"38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b\": container with ID starting with 38f5e3b50d5cd993d641e5673596a9359f7f5460faf8538f404d67e80c03fb2b not found: ID does not exist" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.532561 4805 scope.go:117] "RemoveContainer" containerID="c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56" Feb 26 17:42:41 crc kubenswrapper[4805]: E0226 17:42:41.532989 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56\": container with ID starting with c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56 not found: ID does not exist" containerID="c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.533039 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56"} err="failed to get container status \"c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56\": rpc error: code = NotFound desc = could not find container \"c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56\": container with ID starting with c3cc46f11d6aecbf3ddb18be31d3f87d1bacb53d52ada6a742ddf886948b5d56 not found: ID does not exist" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.589939 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.590050 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.711451 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="14771d36-b57b-4f52-9367-0694e42e2cca" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:42:41 crc kubenswrapper[4805]: I0226 17:42:41.718280 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="14771d36-b57b-4f52-9367-0694e42e2cca" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:42:42 crc kubenswrapper[4805]: I0226 17:42:42.020881 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 17:42:42 crc kubenswrapper[4805]: I0226 17:42:42.612419 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dcfa6af2-6085-43cb-94a8-a02bebd05f49" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 26 17:42:42 crc kubenswrapper[4805]: I0226 17:42:42.612448 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dcfa6af2-6085-43cb-94a8-a02bebd05f49" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.240:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 26 17:42:42 crc kubenswrapper[4805]: I0226 17:42:42.987315 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" path="/var/lib/kubelet/pods/a3c6ef35-7b6a-4dda-b774-d8607ac3a33c/volumes" Feb 26 17:42:50 crc kubenswrapper[4805]: I0226 17:42:50.712095 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 17:42:50 crc kubenswrapper[4805]: I0226 17:42:50.714671 4805 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 26 17:42:50 crc kubenswrapper[4805]: I0226 17:42:50.718645 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 17:42:51 crc kubenswrapper[4805]: I0226 17:42:51.402705 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 26 17:42:51 crc kubenswrapper[4805]: I0226 17:42:51.636510 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 17:42:51 crc kubenswrapper[4805]: I0226 17:42:51.637048 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 17:42:51 crc kubenswrapper[4805]: I0226 17:42:51.646435 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 26 17:42:51 crc kubenswrapper[4805]: I0226 17:42:51.659356 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 17:42:52 crc kubenswrapper[4805]: I0226 17:42:52.409106 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 26 17:42:52 crc kubenswrapper[4805]: I0226 17:42:52.415971 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 26 17:43:02 crc kubenswrapper[4805]: I0226 17:43:02.910362 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-xgqc6"] Feb 26 17:43:02 crc kubenswrapper[4805]: I0226 17:43:02.920740 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-xgqc6"] Feb 26 17:43:02 crc kubenswrapper[4805]: I0226 17:43:02.969245 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d28605-2282-4e5f-93d6-5a3023c7bc9c" path="/var/lib/kubelet/pods/96d28605-2282-4e5f-93d6-5a3023c7bc9c/volumes" Feb 26 17:43:03 crc 
kubenswrapper[4805]: I0226 17:43:03.003860 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-f94vb"] Feb 26 17:43:03 crc kubenswrapper[4805]: E0226 17:43:03.004530 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="extract-utilities" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.004607 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="extract-utilities" Feb 26 17:43:03 crc kubenswrapper[4805]: E0226 17:43:03.004680 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="extract-content" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.004735 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="extract-content" Feb 26 17:43:03 crc kubenswrapper[4805]: E0226 17:43:03.004825 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="registry-server" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.004877 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="registry-server" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.005224 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3c6ef35-7b6a-4dda-b774-d8607ac3a33c" containerName="registry-server" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.005988 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.008213 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.033252 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-f94vb"] Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.116985 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.117343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ln5\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-kube-api-access-l5ln5\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.117397 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-scripts\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.117419 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-config-data\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc 
kubenswrapper[4805]: I0226 17:43:03.117523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-certs\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.219260 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-certs\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.219508 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.219617 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5ln5\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-kube-api-access-l5ln5\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.219706 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-scripts\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.219753 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-config-data\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.225756 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-scripts\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.225946 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.226314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-config-data\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.234573 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-certs\") pod \"cloudkitty-db-sync-f94vb\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.237290 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5ln5\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-kube-api-access-l5ln5\") pod \"cloudkitty-db-sync-f94vb\" 
(UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.328069 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:03 crc kubenswrapper[4805]: I0226 17:43:03.803644 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-f94vb"] Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.561321 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.563268 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-central-agent" containerID="cri-o://ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18" gracePeriod=30 Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.563405 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="sg-core" containerID="cri-o://0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8" gracePeriod=30 Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.563349 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-notification-agent" containerID="cri-o://ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145" gracePeriod=30 Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.563319 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="proxy-httpd" containerID="cri-o://46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e" gracePeriod=30 Feb 26 17:43:04 crc 
kubenswrapper[4805]: I0226 17:43:04.584655 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f94vb" event={"ID":"82ed3f58-3a14-4f1d-9366-e5816386c23b","Type":"ContainerStarted","Data":"53f85add3a4d93cdd311452c753176994fc5f98d7227cbf05bbe1b1085cf2764"} Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.584698 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f94vb" event={"ID":"82ed3f58-3a14-4f1d-9366-e5816386c23b","Type":"ContainerStarted","Data":"abe1f6b2693ea28ba620756ecc760d28d74e592891d8e15fe3d81a2dbffc7f7c"} Feb 26 17:43:04 crc kubenswrapper[4805]: I0226 17:43:04.614360 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-f94vb" podStartSLOduration=2.407284387 podStartE2EDuration="2.614340275s" podCreationTimestamp="2026-02-26 17:43:02 +0000 UTC" firstStartedPulling="2026-02-26 17:43:03.801053039 +0000 UTC m=+1698.362807378" lastFinishedPulling="2026-02-26 17:43:04.008108917 +0000 UTC m=+1698.569863266" observedRunningTime="2026-02-26 17:43:04.606104667 +0000 UTC m=+1699.167859006" watchObservedRunningTime="2026-02-26 17:43:04.614340275 +0000 UTC m=+1699.176094614" Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.296620 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.600057 4805 generic.go:334] "Generic (PLEG): container finished" podID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerID="46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e" exitCode=0 Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.600123 4805 generic.go:334] "Generic (PLEG): container finished" podID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerID="0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8" exitCode=2 Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.600133 4805 generic.go:334] "Generic (PLEG): container 
finished" podID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerID="ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18" exitCode=0 Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.600140 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerDied","Data":"46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e"} Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.600202 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerDied","Data":"0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8"} Feb 26 17:43:05 crc kubenswrapper[4805]: I0226 17:43:05.600219 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerDied","Data":"ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18"} Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.407876 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.453056 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j998b"] Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.455812 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.468592 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j998b"] Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.597205 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vvh\" (UniqueName: \"kubernetes.io/projected/12581950-1ba2-48cc-ace1-798bfc3c6a54-kube-api-access-s2vvh\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.597607 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-utilities\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.597674 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-catalog-content\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.700148 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vvh\" (UniqueName: \"kubernetes.io/projected/12581950-1ba2-48cc-ace1-798bfc3c6a54-kube-api-access-s2vvh\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.700249 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-utilities\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.700292 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-catalog-content\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.700735 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-catalog-content\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.701217 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-utilities\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.722713 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2vvh\" (UniqueName: \"kubernetes.io/projected/12581950-1ba2-48cc-ace1-798bfc3c6a54-kube-api-access-s2vvh\") pod \"certified-operators-j998b\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:06 crc kubenswrapper[4805]: I0226 17:43:06.805307 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:07 crc kubenswrapper[4805]: I0226 17:43:07.254562 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j998b"] Feb 26 17:43:07 crc kubenswrapper[4805]: I0226 17:43:07.624336 4805 generic.go:334] "Generic (PLEG): container finished" podID="82ed3f58-3a14-4f1d-9366-e5816386c23b" containerID="53f85add3a4d93cdd311452c753176994fc5f98d7227cbf05bbe1b1085cf2764" exitCode=0 Feb 26 17:43:07 crc kubenswrapper[4805]: I0226 17:43:07.624421 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f94vb" event={"ID":"82ed3f58-3a14-4f1d-9366-e5816386c23b","Type":"ContainerDied","Data":"53f85add3a4d93cdd311452c753176994fc5f98d7227cbf05bbe1b1085cf2764"} Feb 26 17:43:07 crc kubenswrapper[4805]: I0226 17:43:07.629329 4805 generic.go:334] "Generic (PLEG): container finished" podID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerID="b978c7ec5d4e9264412cea98e8cc5ffe83518114c97dacbd5f556951b1d14c70" exitCode=0 Feb 26 17:43:07 crc kubenswrapper[4805]: I0226 17:43:07.629379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerDied","Data":"b978c7ec5d4e9264412cea98e8cc5ffe83518114c97dacbd5f556951b1d14c70"} Feb 26 17:43:07 crc kubenswrapper[4805]: I0226 17:43:07.629411 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerStarted","Data":"ed11d3d99ab0e875837de59ad108695eaf372a14a99cedc0209de27b7ad4afe9"} Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.224808 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.374600 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle\") pod \"82ed3f58-3a14-4f1d-9366-e5816386c23b\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.374648 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-certs\") pod \"82ed3f58-3a14-4f1d-9366-e5816386c23b\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.374697 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-scripts\") pod \"82ed3f58-3a14-4f1d-9366-e5816386c23b\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.374821 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5ln5\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-kube-api-access-l5ln5\") pod \"82ed3f58-3a14-4f1d-9366-e5816386c23b\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.374836 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-config-data\") pod \"82ed3f58-3a14-4f1d-9366-e5816386c23b\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.383407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-kube-api-access-l5ln5" (OuterVolumeSpecName: "kube-api-access-l5ln5") pod "82ed3f58-3a14-4f1d-9366-e5816386c23b" (UID: "82ed3f58-3a14-4f1d-9366-e5816386c23b"). InnerVolumeSpecName "kube-api-access-l5ln5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.386202 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-certs" (OuterVolumeSpecName: "certs") pod "82ed3f58-3a14-4f1d-9366-e5816386c23b" (UID: "82ed3f58-3a14-4f1d-9366-e5816386c23b"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.413620 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-scripts" (OuterVolumeSpecName: "scripts") pod "82ed3f58-3a14-4f1d-9366-e5816386c23b" (UID: "82ed3f58-3a14-4f1d-9366-e5816386c23b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: E0226 17:43:09.419301 4805 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle podName:82ed3f58-3a14-4f1d-9366-e5816386c23b nodeName:}" failed. No retries permitted until 2026-02-26 17:43:09.919273102 +0000 UTC m=+1704.481027441 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle") pod "82ed3f58-3a14-4f1d-9366-e5816386c23b" (UID: "82ed3f58-3a14-4f1d-9366-e5816386c23b") : error deleting /var/lib/kubelet/pods/82ed3f58-3a14-4f1d-9366-e5816386c23b/volume-subpaths: remove /var/lib/kubelet/pods/82ed3f58-3a14-4f1d-9366-e5816386c23b/volume-subpaths: no such file or directory Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.424289 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-config-data" (OuterVolumeSpecName: "config-data") pod "82ed3f58-3a14-4f1d-9366-e5816386c23b" (UID: "82ed3f58-3a14-4f1d-9366-e5816386c23b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.477196 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.477237 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5ln5\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-kube-api-access-l5ln5\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.477250 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/82ed3f58-3a14-4f1d-9366-e5816386c23b-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.477258 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.515810 4805 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.650891 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-f94vb" event={"ID":"82ed3f58-3a14-4f1d-9366-e5816386c23b","Type":"ContainerDied","Data":"abe1f6b2693ea28ba620756ecc760d28d74e592891d8e15fe3d81a2dbffc7f7c"} Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.651325 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abe1f6b2693ea28ba620756ecc760d28d74e592891d8e15fe3d81a2dbffc7f7c" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.650956 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-f94vb" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.653572 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerStarted","Data":"388b7afd6386cf53cf706c499cbcf735ab922cf555b331bc9a7bb8ef4263026a"} Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.657057 4805 generic.go:334] "Generic (PLEG): container finished" podID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerID="ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145" exitCode=0 Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.657101 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerDied","Data":"ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145"} Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.657129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0be4e82e-5695-4f70-9e35-8ef03d0de22c","Type":"ContainerDied","Data":"7fe1edced7c98ba2755c6ad82517ace47da734edf6ab40f6d3b9e5ebcfc96c06"} Feb 26 
17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.657150 4805 scope.go:117] "RemoveContainer" containerID="46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.657289 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.679718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-combined-ca-bundle\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.679773 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-log-httpd\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.679822 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-ceilometer-tls-certs\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.679868 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-sg-core-conf-yaml\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.680005 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-config-data\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.680134 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-run-httpd\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.680217 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-scripts\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.680272 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wzxf\" (UniqueName: \"kubernetes.io/projected/0be4e82e-5695-4f70-9e35-8ef03d0de22c-kube-api-access-7wzxf\") pod \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\" (UID: \"0be4e82e-5695-4f70-9e35-8ef03d0de22c\") " Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.695933 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be4e82e-5695-4f70-9e35-8ef03d0de22c-kube-api-access-7wzxf" (OuterVolumeSpecName: "kube-api-access-7wzxf") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "kube-api-access-7wzxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.702650 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.707698 4805 scope.go:117] "RemoveContainer" containerID="0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.707834 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.719046 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-scripts" (OuterVolumeSpecName: "scripts") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.768140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.784475 4805 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.784526 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.784538 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wzxf\" (UniqueName: \"kubernetes.io/projected/0be4e82e-5695-4f70-9e35-8ef03d0de22c-kube-api-access-7wzxf\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.784552 4805 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0be4e82e-5695-4f70-9e35-8ef03d0de22c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.784562 4805 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.792778 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-zzvd9"] Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.827884 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-zzvd9"] Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.845575 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod 
"0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.856637 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-5twt4"] Feb 26 17:43:09 crc kubenswrapper[4805]: E0226 17:43:09.857313 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-central-agent" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857336 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-central-agent" Feb 26 17:43:09 crc kubenswrapper[4805]: E0226 17:43:09.857368 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="sg-core" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857377 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="sg-core" Feb 26 17:43:09 crc kubenswrapper[4805]: E0226 17:43:09.857392 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ed3f58-3a14-4f1d-9366-e5816386c23b" containerName="cloudkitty-db-sync" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857399 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ed3f58-3a14-4f1d-9366-e5816386c23b" containerName="cloudkitty-db-sync" Feb 26 17:43:09 crc kubenswrapper[4805]: E0226 17:43:09.857419 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-notification-agent" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857426 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-notification-agent" Feb 26 17:43:09 crc 
kubenswrapper[4805]: E0226 17:43:09.857434 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="proxy-httpd" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857444 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="proxy-httpd" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857668 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="sg-core" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857689 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ed3f58-3a14-4f1d-9366-e5816386c23b" containerName="cloudkitty-db-sync" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857701 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-notification-agent" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857726 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="proxy-httpd" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.857748 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" containerName="ceilometer-central-agent" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.858710 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.897756 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-5twt4"] Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.899298 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.938909 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:09 crc kubenswrapper[4805]: I0226 17:43:09.954638 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-config-data" (OuterVolumeSpecName: "config-data") pod "0be4e82e-5695-4f70-9e35-8ef03d0de22c" (UID: "0be4e82e-5695-4f70-9e35-8ef03d0de22c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.001666 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle\") pod \"82ed3f58-3a14-4f1d-9366-e5816386c23b\" (UID: \"82ed3f58-3a14-4f1d-9366-e5816386c23b\") " Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002156 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-config-data\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002236 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4w4l\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-kube-api-access-t4w4l\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-combined-ca-bundle\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002523 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-certs\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " 
pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002619 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-scripts\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002737 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.002787 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0be4e82e-5695-4f70-9e35-8ef03d0de22c-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.006550 4805 scope.go:117] "RemoveContainer" containerID="ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.013253 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82ed3f58-3a14-4f1d-9366-e5816386c23b" (UID: "82ed3f58-3a14-4f1d-9366-e5816386c23b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.045606 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.056871 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.057888 4805 scope.go:117] "RemoveContainer" containerID="ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.077377 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.079784 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.082897 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.084890 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.085170 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.111488 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-certs\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.111602 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-scripts\") pod \"cloudkitty-storageinit-5twt4\" (UID: 
\"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.111742 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-config-data\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.111796 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4w4l\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-kube-api-access-t4w4l\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.111903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-combined-ca-bundle\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.112011 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ed3f58-3a14-4f1d-9366-e5816386c23b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.114149 4805 scope.go:117] "RemoveContainer" containerID="46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e" Feb 26 17:43:10 crc kubenswrapper[4805]: E0226 17:43:10.116424 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e\": container with ID 
starting with 46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e not found: ID does not exist" containerID="46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.116484 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e"} err="failed to get container status \"46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e\": rpc error: code = NotFound desc = could not find container \"46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e\": container with ID starting with 46fcdb54e0aa492fbfd867675e4ffb795bca6b2a13c5b4dc34c54425f203348e not found: ID does not exist" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.116516 4805 scope.go:117] "RemoveContainer" containerID="0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.118036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-scripts\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.123830 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-certs\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.125477 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-combined-ca-bundle\") pod \"cloudkitty-storageinit-5twt4\" (UID: 
\"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: E0226 17:43:10.125901 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8\": container with ID starting with 0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8 not found: ID does not exist" containerID="0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.125933 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8"} err="failed to get container status \"0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8\": rpc error: code = NotFound desc = could not find container \"0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8\": container with ID starting with 0c39a8b06f2a415d69c939be9e610133f3a99a97298e30c889d01f03dcaffdf8 not found: ID does not exist" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.125957 4805 scope.go:117] "RemoveContainer" containerID="ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.129082 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-config-data\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: E0226 17:43:10.131186 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145\": container with ID starting with 
ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145 not found: ID does not exist" containerID="ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.131248 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145"} err="failed to get container status \"ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145\": rpc error: code = NotFound desc = could not find container \"ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145\": container with ID starting with ae4ea9ccd0093791148640a462618f1eef227a758e42342cae47104454a14145 not found: ID does not exist" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.131284 4805 scope.go:117] "RemoveContainer" containerID="ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18" Feb 26 17:43:10 crc kubenswrapper[4805]: E0226 17:43:10.133846 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18\": container with ID starting with ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18 not found: ID does not exist" containerID="ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.133892 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18"} err="failed to get container status \"ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18\": rpc error: code = NotFound desc = could not find container \"ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18\": container with ID starting with ebc99afe112410fa180df80defa9f86f8a6b10e2ae132b60a9592d3fa24f0f18 not found: ID does not 
exist" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.144834 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.148743 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4w4l\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-kube-api-access-t4w4l\") pod \"cloudkitty-storageinit-5twt4\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.214221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcsr\" (UniqueName: \"kubernetes.io/projected/6c49186e-522c-4f97-8d17-40c887d09de8-kube-api-access-xvcsr\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.214334 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.214507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.214775 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.214910 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49186e-522c-4f97-8d17-40c887d09de8-log-httpd\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.214967 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-config-data\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.215168 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49186e-522c-4f97-8d17-40c887d09de8-run-httpd\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.215214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-scripts\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.312713 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317311 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49186e-522c-4f97-8d17-40c887d09de8-log-httpd\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317349 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-config-data\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317419 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49186e-522c-4f97-8d17-40c887d09de8-run-httpd\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-scripts\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317514 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcsr\" (UniqueName: 
\"kubernetes.io/projected/6c49186e-522c-4f97-8d17-40c887d09de8-kube-api-access-xvcsr\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317561 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317606 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.317901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49186e-522c-4f97-8d17-40c887d09de8-log-httpd\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.320821 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.320954 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc 
kubenswrapper[4805]: I0226 17:43:10.321498 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49186e-522c-4f97-8d17-40c887d09de8-run-httpd\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.322323 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-config-data\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.322454 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-scripts\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.323615 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49186e-522c-4f97-8d17-40c887d09de8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.344761 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcsr\" (UniqueName: \"kubernetes.io/projected/6c49186e-522c-4f97-8d17-40c887d09de8-kube-api-access-xvcsr\") pod \"ceilometer-0\" (UID: \"6c49186e-522c-4f97-8d17-40c887d09de8\") " pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.401189 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.841880 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-5twt4"] Feb 26 17:43:10 crc kubenswrapper[4805]: W0226 17:43:10.853294 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32fc7d22_e37b_433b_85be_58596c3e9c0a.slice/crio-e38acb509a587facd9c787df831c7a6346f4ba28150d3a9448c230c5de80bb8f WatchSource:0}: Error finding container e38acb509a587facd9c787df831c7a6346f4ba28150d3a9448c230c5de80bb8f: Status 404 returned error can't find the container with id e38acb509a587facd9c787df831c7a6346f4ba28150d3a9448c230c5de80bb8f Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.971390 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be4e82e-5695-4f70-9e35-8ef03d0de22c" path="/var/lib/kubelet/pods/0be4e82e-5695-4f70-9e35-8ef03d0de22c/volumes" Feb 26 17:43:10 crc kubenswrapper[4805]: I0226 17:43:10.974641 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f369c5e8-1932-4293-9e75-6b74c4d4eb21" path="/var/lib/kubelet/pods/f369c5e8-1932-4293-9e75-6b74c4d4eb21/volumes" Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.018700 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 26 17:43:11 crc kubenswrapper[4805]: W0226 17:43:11.023875 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c49186e_522c_4f97_8d17_40c887d09de8.slice/crio-383ec287fd66801047f6e3f1972a7100e04817b9e1a595beca73a737f0fa10d0 WatchSource:0}: Error finding container 383ec287fd66801047f6e3f1972a7100e04817b9e1a595beca73a737f0fa10d0: Status 404 returned error can't find the container with id 383ec287fd66801047f6e3f1972a7100e04817b9e1a595beca73a737f0fa10d0 Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 
17:43:11.195080 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerName="rabbitmq" containerID="cri-o://2b9573e545642ce301ecb6bd38b1385665b816b74305fdbf5b8d3ba79aedd146" gracePeriod=604795 Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.687480 4805 generic.go:334] "Generic (PLEG): container finished" podID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerID="388b7afd6386cf53cf706c499cbcf735ab922cf555b331bc9a7bb8ef4263026a" exitCode=0 Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.687577 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerDied","Data":"388b7afd6386cf53cf706c499cbcf735ab922cf555b331bc9a7bb8ef4263026a"} Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.691966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49186e-522c-4f97-8d17-40c887d09de8","Type":"ContainerStarted","Data":"383ec287fd66801047f6e3f1972a7100e04817b9e1a595beca73a737f0fa10d0"} Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.696521 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5twt4" event={"ID":"32fc7d22-e37b-433b-85be-58596c3e9c0a","Type":"ContainerStarted","Data":"b74665d65368ed1a299c21ea9acd633f03fbb9ccfa23d912aad66481618bfdca"} Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.696561 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5twt4" event={"ID":"32fc7d22-e37b-433b-85be-58596c3e9c0a","Type":"ContainerStarted","Data":"e38acb509a587facd9c787df831c7a6346f4ba28150d3a9448c230c5de80bb8f"} Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.736049 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-5twt4" 
podStartSLOduration=2.736028771 podStartE2EDuration="2.736028771s" podCreationTimestamp="2026-02-26 17:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:43:11.725315901 +0000 UTC m=+1706.287070240" watchObservedRunningTime="2026-02-26 17:43:11.736028771 +0000 UTC m=+1706.297783110" Feb 26 17:43:11 crc kubenswrapper[4805]: I0226 17:43:11.743169 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerName="rabbitmq" containerID="cri-o://b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d" gracePeriod=604795 Feb 26 17:43:12 crc kubenswrapper[4805]: I0226 17:43:12.716727 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerStarted","Data":"519c9f37e17b085f4c73241c86dd7faa6d01b175b0ad5f9b91e12f949ad9acd1"} Feb 26 17:43:12 crc kubenswrapper[4805]: I0226 17:43:12.749791 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j998b" podStartSLOduration=1.990047823 podStartE2EDuration="6.749773768s" podCreationTimestamp="2026-02-26 17:43:06 +0000 UTC" firstStartedPulling="2026-02-26 17:43:07.630941746 +0000 UTC m=+1702.192696085" lastFinishedPulling="2026-02-26 17:43:12.390667691 +0000 UTC m=+1706.952422030" observedRunningTime="2026-02-26 17:43:12.738051672 +0000 UTC m=+1707.299806011" watchObservedRunningTime="2026-02-26 17:43:12.749773768 +0000 UTC m=+1707.311528107" Feb 26 17:43:13 crc kubenswrapper[4805]: I0226 17:43:13.731854 4805 generic.go:334] "Generic (PLEG): container finished" podID="32fc7d22-e37b-433b-85be-58596c3e9c0a" containerID="b74665d65368ed1a299c21ea9acd633f03fbb9ccfa23d912aad66481618bfdca" exitCode=0 Feb 26 17:43:13 crc kubenswrapper[4805]: 
I0226 17:43:13.731899 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5twt4" event={"ID":"32fc7d22-e37b-433b-85be-58596c3e9c0a","Type":"ContainerDied","Data":"b74665d65368ed1a299c21ea9acd633f03fbb9ccfa23d912aad66481618bfdca"} Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.754468 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5twt4" event={"ID":"32fc7d22-e37b-433b-85be-58596c3e9c0a","Type":"ContainerDied","Data":"e38acb509a587facd9c787df831c7a6346f4ba28150d3a9448c230c5de80bb8f"} Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.754986 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e38acb509a587facd9c787df831c7a6346f4ba28150d3a9448c230c5de80bb8f" Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.815715 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.940469 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-config-data\") pod \"32fc7d22-e37b-433b-85be-58596c3e9c0a\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.940536 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4w4l\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-kube-api-access-t4w4l\") pod \"32fc7d22-e37b-433b-85be-58596c3e9c0a\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.940581 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-combined-ca-bundle\") pod 
\"32fc7d22-e37b-433b-85be-58596c3e9c0a\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.940695 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-certs\") pod \"32fc7d22-e37b-433b-85be-58596c3e9c0a\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.940735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-scripts\") pod \"32fc7d22-e37b-433b-85be-58596c3e9c0a\" (UID: \"32fc7d22-e37b-433b-85be-58596c3e9c0a\") " Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.948391 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-kube-api-access-t4w4l" (OuterVolumeSpecName: "kube-api-access-t4w4l") pod "32fc7d22-e37b-433b-85be-58596c3e9c0a" (UID: "32fc7d22-e37b-433b-85be-58596c3e9c0a"). InnerVolumeSpecName "kube-api-access-t4w4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.977114 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-certs" (OuterVolumeSpecName: "certs") pod "32fc7d22-e37b-433b-85be-58596c3e9c0a" (UID: "32fc7d22-e37b-433b-85be-58596c3e9c0a"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.983462 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-config-data" (OuterVolumeSpecName: "config-data") pod "32fc7d22-e37b-433b-85be-58596c3e9c0a" (UID: "32fc7d22-e37b-433b-85be-58596c3e9c0a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.985304 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-scripts" (OuterVolumeSpecName: "scripts") pod "32fc7d22-e37b-433b-85be-58596c3e9c0a" (UID: "32fc7d22-e37b-433b-85be-58596c3e9c0a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:15 crc kubenswrapper[4805]: I0226 17:43:15.985538 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32fc7d22-e37b-433b-85be-58596c3e9c0a" (UID: "32fc7d22-e37b-433b-85be-58596c3e9c0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.046428 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.046461 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.046470 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.046479 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4w4l\" (UniqueName: \"kubernetes.io/projected/32fc7d22-e37b-433b-85be-58596c3e9c0a-kube-api-access-t4w4l\") on node \"crc\" DevicePath \"\"" Feb 26 
17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.046490 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fc7d22-e37b-433b-85be-58596c3e9c0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.769123 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-5twt4" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.770186 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49186e-522c-4f97-8d17-40c887d09de8","Type":"ContainerStarted","Data":"5ed89411c6ae2407b75c96e39b8ba1c1a3e19153d074768f14aa6bf90efc60e8"} Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.770242 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49186e-522c-4f97-8d17-40c887d09de8","Type":"ContainerStarted","Data":"1f85eef685b650b19ffe1d2654cbfe3622c560390ceb83ebcbcd16e52e57691b"} Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.806182 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.806230 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.873428 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.947850 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.948178 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="72a9ed44-dc10-4f81-be61-b6ba20c83548" 
containerName="cloudkitty-proc" containerID="cri-o://198b395ccd7c0775cca85f690aa9da5aed7d703bf945bda979ab5a0685c36e39" gracePeriod=30 Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.973901 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.974226 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api-log" containerID="cri-o://f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332" gracePeriod=30 Feb 26 17:43:16 crc kubenswrapper[4805]: I0226 17:43:16.974398 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" containerID="cri-o://4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05" gracePeriod=30 Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.796320 4805 generic.go:334] "Generic (PLEG): container finished" podID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerID="2b9573e545642ce301ecb6bd38b1385665b816b74305fdbf5b8d3ba79aedd146" exitCode=0 Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.796399 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82935132-2a23-4b0c-86c5-be40089b7e0b","Type":"ContainerDied","Data":"2b9573e545642ce301ecb6bd38b1385665b816b74305fdbf5b8d3ba79aedd146"} Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.802657 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49186e-522c-4f97-8d17-40c887d09de8","Type":"ContainerStarted","Data":"7a21e8eeb860fe1cfc2a8fa3b5a15f07d26e9e0ecaab59a8d406c3576c4f3b8d"} Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.805509 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerID="f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332" exitCode=143 Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.806353 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"69411227-14e0-40b4-a753-f2178bfbdd2a","Type":"ContainerDied","Data":"f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332"} Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.887909 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:17 crc kubenswrapper[4805]: I0226 17:43:17.942169 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j998b"] Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.317189 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.407067 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-tls\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411354 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-erlang-cookie\") pod 
\"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411492 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82935132-2a23-4b0c-86c5-be40089b7e0b-erlang-cookie-secret\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411538 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsdj5\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-kube-api-access-hsdj5\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411778 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-plugins-conf\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411812 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-server-conf\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411871 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-confd\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411905 4805 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-plugins\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.411952 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-config-data\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.412045 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82935132-2a23-4b0c-86c5-be40089b7e0b-pod-info\") pod \"82935132-2a23-4b0c-86c5-be40089b7e0b\" (UID: \"82935132-2a23-4b0c-86c5-be40089b7e0b\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.412422 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.412633 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.413531 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.413552 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.422078 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.423768 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/82935132-2a23-4b0c-86c5-be40089b7e0b-pod-info" (OuterVolumeSpecName: "pod-info") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.425726 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.429867 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-kube-api-access-hsdj5" (OuterVolumeSpecName: "kube-api-access-hsdj5") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "kube-api-access-hsdj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.430360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82935132-2a23-4b0c-86c5-be40089b7e0b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.518427 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.518461 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82935132-2a23-4b0c-86c5-be40089b7e0b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.518473 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.518492 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/82935132-2a23-4b0c-86c5-be40089b7e0b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.518504 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsdj5\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-kube-api-access-hsdj5\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.532851 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476" (OuterVolumeSpecName: "persistence") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "pvc-a21ab19a-588e-4827-9716-83290db70476". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.537992 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-config-data" (OuterVolumeSpecName: "config-data") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.558627 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-server-conf" (OuterVolumeSpecName: "server-conf") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.620057 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.620083 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82935132-2a23-4b0c-86c5-be40089b7e0b-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.620107 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") on node \"crc\" " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.693919 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "82935132-2a23-4b0c-86c5-be40089b7e0b" (UID: "82935132-2a23-4b0c-86c5-be40089b7e0b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.696341 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.696574 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a21ab19a-588e-4827-9716-83290db70476" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476") on node "crc" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.705767 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.723529 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82935132-2a23-4b0c-86c5-be40089b7e0b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.723635 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.824493 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-erlang-cookie\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.824544 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-plugins\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.824673 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k4l4\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-kube-api-access-5k4l4\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.824707 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-tls\") pod 
\"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.824938 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.825329 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.825790 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.825876 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c793c17-a107-4006-9e15-5a2ac2afa296-erlang-cookie-secret\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.825907 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-server-conf\") pod 
\"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.825941 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-confd\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.826076 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-config-data\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.826097 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-plugins-conf\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.826154 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c793c17-a107-4006-9e15-5a2ac2afa296-pod-info\") pod \"9c793c17-a107-4006-9e15-5a2ac2afa296\" (UID: \"9c793c17-a107-4006-9e15-5a2ac2afa296\") " Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.826950 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.826964 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.833095 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.833824 4805 generic.go:334] "Generic (PLEG): container finished" podID="72a9ed44-dc10-4f81-be61-b6ba20c83548" containerID="198b395ccd7c0775cca85f690aa9da5aed7d703bf945bda979ab5a0685c36e39" exitCode=0 Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.833858 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9c793c17-a107-4006-9e15-5a2ac2afa296-pod-info" (OuterVolumeSpecName: "pod-info") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.833894 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72a9ed44-dc10-4f81-be61-b6ba20c83548","Type":"ContainerDied","Data":"198b395ccd7c0775cca85f690aa9da5aed7d703bf945bda979ab5a0685c36e39"} Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.834582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-kube-api-access-5k4l4" (OuterVolumeSpecName: "kube-api-access-5k4l4") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "kube-api-access-5k4l4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.835846 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c793c17-a107-4006-9e15-5a2ac2afa296-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.839292 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.843639 4805 generic.go:334] "Generic (PLEG): container finished" podID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerID="b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d" exitCode=0 Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.843705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c793c17-a107-4006-9e15-5a2ac2afa296","Type":"ContainerDied","Data":"b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d"} Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.843738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9c793c17-a107-4006-9e15-5a2ac2afa296","Type":"ContainerDied","Data":"69fdfc19086dc3d09d2b2a5bb4e95415a661a317a1aae6e0b13e5fa1c04267fd"} Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.843759 4805 scope.go:117] "RemoveContainer" containerID="b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d" Feb 
26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.843902 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.860123 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"82935132-2a23-4b0c-86c5-be40089b7e0b","Type":"ContainerDied","Data":"902f9ec07bec344dd871d3f27d472d3de39be409f4fbeb88dfadf59c7df9f257"} Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.860172 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.869127 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809" (OuterVolumeSpecName: "persistence") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "pvc-436dc230-dcc1-4c94-a5cd-efd150a21809". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.932161 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-config-data" (OuterVolumeSpecName: "config-data") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934333 4805 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c793c17-a107-4006-9e15-5a2ac2afa296-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934364 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934376 4805 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934388 4805 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c793c17-a107-4006-9e15-5a2ac2afa296-pod-info\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934404 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k4l4\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-kube-api-access-5k4l4\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934415 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.934444 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") on node \"crc\" " Feb 26 17:43:18 
crc kubenswrapper[4805]: I0226 17:43:18.984904 4805 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.985328 4805 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-436dc230-dcc1-4c94-a5cd-efd150a21809" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809") on node "crc" Feb 26 17:43:18 crc kubenswrapper[4805]: I0226 17:43:18.988717 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-server-conf" (OuterVolumeSpecName: "server-conf") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.036312 4805 reconciler_common.go:293] "Volume detached for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.036365 4805 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c793c17-a107-4006-9e15-5a2ac2afa296-server-conf\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.125417 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.180888 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9c793c17-a107-4006-9e15-5a2ac2afa296" (UID: "9c793c17-a107-4006-9e15-5a2ac2afa296"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.193605 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.202076 4805 scope.go:117] "RemoveContainer" containerID="62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.210463 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.238714 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.239594 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a9ed44-dc10-4f81-be61-b6ba20c83548" containerName="cloudkitty-proc" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.239639 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a9ed44-dc10-4f81-be61-b6ba20c83548" containerName="cloudkitty-proc" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.239661 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fc7d22-e37b-433b-85be-58596c3e9c0a" containerName="cloudkitty-storageinit" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.239672 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fc7d22-e37b-433b-85be-58596c3e9c0a" containerName="cloudkitty-storageinit" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 
17:43:19.239699 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerName="setup-container" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.239708 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerName="setup-container" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.239735 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerName="setup-container" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.239741 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerName="setup-container" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.239748 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerName="rabbitmq" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.239755 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerName="rabbitmq" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.239771 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerName="rabbitmq" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.239778 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerName="rabbitmq" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.240056 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" containerName="rabbitmq" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.240084 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a9ed44-dc10-4f81-be61-b6ba20c83548" containerName="cloudkitty-proc" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.240099 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" containerName="rabbitmq" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.240114 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fc7d22-e37b-433b-85be-58596c3e9c0a" containerName="cloudkitty-storageinit" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.241714 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data\") pod \"72a9ed44-dc10-4f81-be61-b6ba20c83548\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.241725 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.241815 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxhhf\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-kube-api-access-zxhhf\") pod \"72a9ed44-dc10-4f81-be61-b6ba20c83548\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.242147 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-combined-ca-bundle\") pod \"72a9ed44-dc10-4f81-be61-b6ba20c83548\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.242245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data-custom\") pod \"72a9ed44-dc10-4f81-be61-b6ba20c83548\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 
17:43:19.242370 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-scripts\") pod \"72a9ed44-dc10-4f81-be61-b6ba20c83548\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.242468 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-certs\") pod \"72a9ed44-dc10-4f81-be61-b6ba20c83548\" (UID: \"72a9ed44-dc10-4f81-be61-b6ba20c83548\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.243501 4805 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c793c17-a107-4006-9e15-5a2ac2afa296-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.245191 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.245639 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.245789 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.245919 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.246372 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bphbl" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.247614 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.249820 4805 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.256412 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.256864 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-certs" (OuterVolumeSpecName: "certs") pod "72a9ed44-dc10-4f81-be61-b6ba20c83548" (UID: "72a9ed44-dc10-4f81-be61-b6ba20c83548"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.257287 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-kube-api-access-zxhhf" (OuterVolumeSpecName: "kube-api-access-zxhhf") pod "72a9ed44-dc10-4f81-be61-b6ba20c83548" (UID: "72a9ed44-dc10-4f81-be61-b6ba20c83548"). InnerVolumeSpecName "kube-api-access-zxhhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.257747 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72a9ed44-dc10-4f81-be61-b6ba20c83548" (UID: "72a9ed44-dc10-4f81-be61-b6ba20c83548"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.257796 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-scripts" (OuterVolumeSpecName: "scripts") pod "72a9ed44-dc10-4f81-be61-b6ba20c83548" (UID: "72a9ed44-dc10-4f81-be61-b6ba20c83548"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.339125 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data" (OuterVolumeSpecName: "config-data") pod "72a9ed44-dc10-4f81-be61-b6ba20c83548" (UID: "72a9ed44-dc10-4f81-be61-b6ba20c83548"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.339176 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72a9ed44-dc10-4f81-be61-b6ba20c83548" (UID: "72a9ed44-dc10-4f81-be61-b6ba20c83548"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.339346 4805 scope.go:117] "RemoveContainer" containerID="b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348364 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348418 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348437 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4z4\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-kube-api-access-wc4z4\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348486 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348504 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348544 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348580 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348599 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348641 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.348666 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.354549 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d\": container with ID starting with b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d not found: ID does not exist" containerID="b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.354601 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d"} err="failed to get container status \"b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d\": rpc error: code = NotFound desc = could not find container \"b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d\": container with ID starting with b42a861aab0cae3e450ecd3eb59fe2377389b7472abe53b997d75d139154328d not found: ID does not exist" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.354629 4805 scope.go:117] "RemoveContainer" containerID="62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.354946 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f\": container with ID starting with 62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f not found: ID does not exist" containerID="62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.354969 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f"} err="failed to get container status \"62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f\": rpc error: code = NotFound desc = could not find container \"62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f\": container with ID starting with 62b3d33a0aa7871219f4b9d15d0569cc3a99df503b74f0a8d470c476d2904b2f not found: ID does not exist" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.354980 4805 scope.go:117] "RemoveContainer" containerID="2b9573e545642ce301ecb6bd38b1385665b816b74305fdbf5b8d3ba79aedd146" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 
17:43:19.359691 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.359740 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.359752 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.359763 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxhhf\" (UniqueName: \"kubernetes.io/projected/72a9ed44-dc10-4f81-be61-b6ba20c83548-kube-api-access-zxhhf\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.359773 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.359784 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a9ed44-dc10-4f81-be61-b6ba20c83548-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.396498 4805 scope.go:117] "RemoveContainer" containerID="db9ad2b23715de3e228e0c29dc0b879be5e7ad0805b9838a501fa17eb0b47b4c" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.461877 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-config-data\") pod \"rabbitmq-server-0\" 
(UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.461911 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.461954 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.461986 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462114 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462130 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wc4z4\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-kube-api-access-wc4z4\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462156 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462172 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462189 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462231 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.462852 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.463784 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.464995 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.469459 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.470186 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" 
Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.471570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.473180 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.473313 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.479330 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.479368 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94e1f2f5e6b4d98c41fa2e76b2416407adf395bf747ae59a28cbbbf46e2baffb/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.483138 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.496066 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4z4\" (UniqueName: \"kubernetes.io/projected/3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2-kube-api-access-wc4z4\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.510329 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.527218 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.563732 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-internal-tls-certs\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.563823 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69411227-14e0-40b4-a753-f2178bfbdd2a-logs\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.563847 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data-custom\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.563883 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-public-tls-certs\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.563973 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-certs\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.564110 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-combined-ca-bundle\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.564131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.564146 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-scripts\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.564173 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5rhh\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-kube-api-access-t5rhh\") pod \"69411227-14e0-40b4-a753-f2178bfbdd2a\" (UID: \"69411227-14e0-40b4-a753-f2178bfbdd2a\") " Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.573135 4805 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69411227-14e0-40b4-a753-f2178bfbdd2a-logs" (OuterVolumeSpecName: "logs") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.573191 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-kube-api-access-t5rhh" (OuterVolumeSpecName: "kube-api-access-t5rhh") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "kube-api-access-t5rhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.576734 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-certs" (OuterVolumeSpecName: "certs") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.591654 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.592185 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.592199 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" Feb 26 17:43:19 crc kubenswrapper[4805]: E0226 17:43:19.592232 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api-log" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.592240 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api-log" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.592430 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.592451 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerName="cloudkitty-api-log" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.594535 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.597212 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.608544 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.608929 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.609085 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.609727 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.609896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.610043 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jlr6v" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.610370 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.625874 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-scripts" (OuterVolumeSpecName: "scripts") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.628694 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.645887 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.654407 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data" (OuterVolumeSpecName: "config-data") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666322 4805 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666356 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666368 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666376 4805 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-scripts\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666397 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5rhh\" (UniqueName: \"kubernetes.io/projected/69411227-14e0-40b4-a753-f2178bfbdd2a-kube-api-access-t5rhh\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666406 4805 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69411227-14e0-40b4-a753-f2178bfbdd2a-logs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.666414 4805 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.668081 4805 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-a21ab19a-588e-4827-9716-83290db70476\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a21ab19a-588e-4827-9716-83290db70476\") pod \"rabbitmq-server-0\" (UID: \"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2\") " pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.743914 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.753648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "69411227-14e0-40b4-a753-f2178bfbdd2a" (UID: "69411227-14e0-40b4-a753-f2178bfbdd2a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770095 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770161 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770194 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770407 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770450 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770589 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770685 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770756 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770776 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770844 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rt4d\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-kube-api-access-5rt4d\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770917 4805 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.770955 4805 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/69411227-14e0-40b4-a753-f2178bfbdd2a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.870531 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872308 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rt4d\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-kube-api-access-5rt4d\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872375 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872403 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872429 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872503 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: 
I0226 17:43:19.872540 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872569 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872638 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872707 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872785 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.872818 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.874448 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.875079 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.875391 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.877142 4805 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.877167 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/245f79fdaa526276c0e2ee03c805fa691f64a89402818eb13855aaab894d5f00/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.877411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.877998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.880810 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.880918 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.886276 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.887466 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.888334 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"72a9ed44-dc10-4f81-be61-b6ba20c83548","Type":"ContainerDied","Data":"be23123dce29ad40004c65d95f88da324c0315aac4f3769b4844f9b274a23bd9"} Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.888414 4805 scope.go:117] "RemoveContainer" containerID="198b395ccd7c0775cca85f690aa9da5aed7d703bf945bda979ab5a0685c36e39" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.898759 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rt4d\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-kube-api-access-5rt4d\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.900867 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.942900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6c49186e-522c-4f97-8d17-40c887d09de8","Type":"ContainerStarted","Data":"75ece8dd9be9ce0a62f053805165a8fa30fc378dd4793a76500f3ef12e64c440"} Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.946101 4805 generic.go:334] "Generic (PLEG): container finished" podID="69411227-14e0-40b4-a753-f2178bfbdd2a" containerID="4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05" exitCode=0 Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.946173 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"69411227-14e0-40b4-a753-f2178bfbdd2a","Type":"ContainerDied","Data":"4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05"} Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.946224 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"69411227-14e0-40b4-a753-f2178bfbdd2a","Type":"ContainerDied","Data":"0720c61c9fb74f97c3a65740cded3a4bd88f661c3803374031cbc06b78858a88"} Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.946299 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.955788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-436dc230-dcc1-4c94-a5cd-efd150a21809\") pod \"rabbitmq-cell1-server-0\" (UID: \"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:19 crc kubenswrapper[4805]: I0226 17:43:19.991690 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j998b" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="registry-server" containerID="cri-o://519c9f37e17b085f4c73241c86dd7faa6d01b175b0ad5f9b91e12f949ad9acd1" gracePeriod=2 Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.043982 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.061049 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.075238 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.102090 4805 scope.go:117] "RemoveContainer" containerID="4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.102236 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.126768 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.128471 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.132243 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.132805 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-ttgs5" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.132956 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.133145 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.133327 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.168298 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.180584 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.180727 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.180835 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"certs\" (UniqueName: \"kubernetes.io/projected/a048b7a1-414b-4465-931c-cf987921d7e6-certs\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.180875 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.180930 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhl99\" (UniqueName: \"kubernetes.io/projected/a048b7a1-414b-4465-931c-cf987921d7e6-kube-api-access-bhl99\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.180996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.191560 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.198146 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.200104 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.200465 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.200613 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.207114 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.244493 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.286848 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.286954 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287041 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb0afe-79a7-421d-a18f-b42cebd4398f-logs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " 
pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287155 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-scripts\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287232 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287307 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx2jh\" (UniqueName: \"kubernetes.io/projected/b6cb0afe-79a7-421d-a18f-b42cebd4398f-kube-api-access-mx2jh\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287352 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a048b7a1-414b-4465-931c-cf987921d7e6-certs\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287381 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287463 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhl99\" (UniqueName: \"kubernetes.io/projected/a048b7a1-414b-4465-931c-cf987921d7e6-kube-api-access-bhl99\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287494 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287546 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b6cb0afe-79a7-421d-a18f-b42cebd4398f-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287623 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-config-data\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.287652 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.295085 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a048b7a1-414b-4465-931c-cf987921d7e6-certs\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.304115 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.311503 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.314770 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-scripts\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 
crc kubenswrapper[4805]: I0226 17:43:20.315056 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhl99\" (UniqueName: \"kubernetes.io/projected/a048b7a1-414b-4465-931c-cf987921d7e6-kube-api-access-bhl99\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.317966 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a048b7a1-414b-4465-931c-cf987921d7e6-config-data\") pod \"cloudkitty-proc-0\" (UID: \"a048b7a1-414b-4465-931c-cf987921d7e6\") " pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.389577 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.389918 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b6cb0afe-79a7-421d-a18f-b42cebd4398f-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390200 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-config-data\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390387 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390497 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb0afe-79a7-421d-a18f-b42cebd4398f-logs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390641 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-scripts\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390782 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.390945 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx2jh\" (UniqueName: \"kubernetes.io/projected/b6cb0afe-79a7-421d-a18f-b42cebd4398f-kube-api-access-mx2jh\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 
26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.391826 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6cb0afe-79a7-421d-a18f-b42cebd4398f-logs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.393801 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.395037 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-scripts\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.395241 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b6cb0afe-79a7-421d-a18f-b42cebd4398f-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.395747 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-config-data\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.397167 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-config-data-custom\") pod 
\"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.400328 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.408167 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6cb0afe-79a7-421d-a18f-b42cebd4398f-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.411252 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx2jh\" (UniqueName: \"kubernetes.io/projected/b6cb0afe-79a7-421d-a18f-b42cebd4398f-kube-api-access-mx2jh\") pod \"cloudkitty-api-0\" (UID: \"b6cb0afe-79a7-421d-a18f-b42cebd4398f\") " pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.461291 4805 scope.go:117] "RemoveContainer" containerID="f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.462093 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.475878 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.504757 4805 scope.go:117] "RemoveContainer" containerID="4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05" Feb 26 17:43:20 crc kubenswrapper[4805]: E0226 17:43:20.505591 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05\": container with ID starting with 4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05 not found: ID does not exist" containerID="4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.505657 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05"} err="failed to get container status \"4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05\": rpc error: code = NotFound desc = could not find container \"4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05\": container with ID starting with 4bc1b1627ba0aaf3af3c9f5f6d1e1d4c956b78ed1f87c20b3c197955a2e00e05 not found: ID does not exist" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.505701 4805 scope.go:117] "RemoveContainer" containerID="f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332" Feb 26 17:43:20 crc kubenswrapper[4805]: E0226 17:43:20.509346 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332\": container with ID starting with f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332 not found: ID 
does not exist" containerID="f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.509399 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332"} err="failed to get container status \"f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332\": rpc error: code = NotFound desc = could not find container \"f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332\": container with ID starting with f74cd03523a7658add610343484f110bf6d064e020411cae410e73d3f4efc332 not found: ID does not exist" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.534814 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.989286 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69411227-14e0-40b4-a753-f2178bfbdd2a" path="/var/lib/kubelet/pods/69411227-14e0-40b4-a753-f2178bfbdd2a/volumes" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.991445 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a9ed44-dc10-4f81-be61-b6ba20c83548" path="/var/lib/kubelet/pods/72a9ed44-dc10-4f81-be61-b6ba20c83548/volumes" Feb 26 17:43:20 crc kubenswrapper[4805]: I0226 17:43:20.993071 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82935132-2a23-4b0c-86c5-be40089b7e0b" path="/var/lib/kubelet/pods/82935132-2a23-4b0c-86c5-be40089b7e0b/volumes" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.008557 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c793c17-a107-4006-9e15-5a2ac2afa296" path="/var/lib/kubelet/pods/9c793c17-a107-4006-9e15-5a2ac2afa296/volumes" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.035311 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.044689 4805 generic.go:334] "Generic (PLEG): container finished" podID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerID="519c9f37e17b085f4c73241c86dd7faa6d01b175b0ad5f9b91e12f949ad9acd1" exitCode=0 Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.044778 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerDied","Data":"519c9f37e17b085f4c73241c86dd7faa6d01b175b0ad5f9b91e12f949ad9acd1"} Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.077032 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2","Type":"ContainerStarted","Data":"60c95b425fe3a542ad9bc42c3a0f097565cc7ad719dcacae6bcbb77129f4ef3c"} Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.077947 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.113127 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.123585 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.691373195 podStartE2EDuration="11.123555631s" podCreationTimestamp="2026-02-26 17:43:10 +0000 UTC" firstStartedPulling="2026-02-26 17:43:11.026887945 +0000 UTC m=+1705.588642284" lastFinishedPulling="2026-02-26 17:43:19.459070381 +0000 UTC m=+1714.020824720" observedRunningTime="2026-02-26 17:43:21.103446523 +0000 UTC m=+1715.665200862" watchObservedRunningTime="2026-02-26 17:43:21.123555631 +0000 UTC m=+1715.685309970" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.322792 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cloudkitty-proc-0"] Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.495992 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.666561 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-utilities\") pod \"12581950-1ba2-48cc-ace1-798bfc3c6a54\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.666808 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-catalog-content\") pod \"12581950-1ba2-48cc-ace1-798bfc3c6a54\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.666873 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2vvh\" (UniqueName: \"kubernetes.io/projected/12581950-1ba2-48cc-ace1-798bfc3c6a54-kube-api-access-s2vvh\") pod \"12581950-1ba2-48cc-ace1-798bfc3c6a54\" (UID: \"12581950-1ba2-48cc-ace1-798bfc3c6a54\") " Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.672464 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-utilities" (OuterVolumeSpecName: "utilities") pod "12581950-1ba2-48cc-ace1-798bfc3c6a54" (UID: "12581950-1ba2-48cc-ace1-798bfc3c6a54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.676814 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12581950-1ba2-48cc-ace1-798bfc3c6a54-kube-api-access-s2vvh" (OuterVolumeSpecName: "kube-api-access-s2vvh") pod "12581950-1ba2-48cc-ace1-798bfc3c6a54" (UID: "12581950-1ba2-48cc-ace1-798bfc3c6a54"). InnerVolumeSpecName "kube-api-access-s2vvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.748446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12581950-1ba2-48cc-ace1-798bfc3c6a54" (UID: "12581950-1ba2-48cc-ace1-798bfc3c6a54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.770238 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.770287 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2vvh\" (UniqueName: \"kubernetes.io/projected/12581950-1ba2-48cc-ace1-798bfc3c6a54-kube-api-access-s2vvh\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.770504 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12581950-1ba2-48cc-ace1-798bfc3c6a54-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.982598 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-s2hcq"] Feb 26 17:43:21 crc kubenswrapper[4805]: E0226 17:43:21.983001 
4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="registry-server" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.983148 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="registry-server" Feb 26 17:43:21 crc kubenswrapper[4805]: E0226 17:43:21.983185 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="extract-content" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.983192 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="extract-content" Feb 26 17:43:21 crc kubenswrapper[4805]: E0226 17:43:21.983209 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="extract-utilities" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.983216 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="extract-utilities" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.983407 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" containerName="registry-server" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.984627 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:21 crc kubenswrapper[4805]: I0226 17:43:21.989112 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.013066 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-s2hcq"] Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080415 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44h4n\" (UniqueName: \"kubernetes.io/projected/464c1481-f210-4c07-90ca-215492bf4ebe-kube-api-access-44h4n\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080499 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080615 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-config\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080714 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " 
pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080775 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.080890 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.102330 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b6cb0afe-79a7-421d-a18f-b42cebd4398f","Type":"ContainerStarted","Data":"5243033a47953a85c65619b54cd97b7b2aa26ff7f3855c349112af3948c83cbc"} Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.102384 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b6cb0afe-79a7-421d-a18f-b42cebd4398f","Type":"ContainerStarted","Data":"38e0a0cec8ce88221c416aaa3423dda301504e5b1b35a5f7d5449435bfbaaa19"} Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.102397 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-api-0" event={"ID":"b6cb0afe-79a7-421d-a18f-b42cebd4398f","Type":"ContainerStarted","Data":"b872ad2f1a3ed7ddc9688ba8af0249f7bd3c39095820b981afd98090ee898d3f"} Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.103161 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.117515 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j998b" event={"ID":"12581950-1ba2-48cc-ace1-798bfc3c6a54","Type":"ContainerDied","Data":"ed11d3d99ab0e875837de59ad108695eaf372a14a99cedc0209de27b7ad4afe9"} Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.117584 4805 scope.go:117] "RemoveContainer" containerID="519c9f37e17b085f4c73241c86dd7faa6d01b175b0ad5f9b91e12f949ad9acd1" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.117735 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j998b" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.132801 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"a048b7a1-414b-4465-931c-cf987921d7e6","Type":"ContainerStarted","Data":"1de7445849c8bd372bf28b2d95fdb53007fd93845ccd9aef72ec7ff425ea610d"} Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.132855 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"a048b7a1-414b-4465-931c-cf987921d7e6","Type":"ContainerStarted","Data":"58d8fe19d0949d29ae6ce1ec6a98a4fa78c65a95548472739bfd52923c6671f9"} Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.139631 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a","Type":"ContainerStarted","Data":"a577680c0ce41a27a8a9c9a8787f7fca74244cace78c364ac84c8e8f971ff7cf"} Feb 26 17:43:22 crc 
kubenswrapper[4805]: I0226 17:43:22.154759 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.154737009 podStartE2EDuration="2.154737009s" podCreationTimestamp="2026-02-26 17:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:43:22.126182188 +0000 UTC m=+1716.687936527" watchObservedRunningTime="2026-02-26 17:43:22.154737009 +0000 UTC m=+1716.716491348" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.159271 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=1.9266393499999999 podStartE2EDuration="2.159251683s" podCreationTimestamp="2026-02-26 17:43:20 +0000 UTC" firstStartedPulling="2026-02-26 17:43:21.342240353 +0000 UTC m=+1715.903994682" lastFinishedPulling="2026-02-26 17:43:21.574852686 +0000 UTC m=+1716.136607015" observedRunningTime="2026-02-26 17:43:22.157117899 +0000 UTC m=+1716.718872238" watchObservedRunningTime="2026-02-26 17:43:22.159251683 +0000 UTC m=+1716.721006022" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.160576 4805 scope.go:117] "RemoveContainer" containerID="388b7afd6386cf53cf706c499cbcf735ab922cf555b331bc9a7bb8ef4263026a" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.183943 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.184007 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-svc\") pod 
\"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.184091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.184199 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44h4n\" (UniqueName: \"kubernetes.io/projected/464c1481-f210-4c07-90ca-215492bf4ebe-kube-api-access-44h4n\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.184245 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.184355 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-config\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.184467 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: 
\"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.185292 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-svc\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.191052 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-config\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.191683 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-nb\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.192287 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-sb\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.191650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-swift-storage-0\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc 
kubenswrapper[4805]: I0226 17:43:22.193171 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-openstack-edpm-ipam\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.211228 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j998b"] Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.214827 4805 scope.go:117] "RemoveContainer" containerID="b978c7ec5d4e9264412cea98e8cc5ffe83518114c97dacbd5f556951b1d14c70" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.232154 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44h4n\" (UniqueName: \"kubernetes.io/projected/464c1481-f210-4c07-90ca-215492bf4ebe-kube-api-access-44h4n\") pod \"dnsmasq-dns-dc7c944bf-s2hcq\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.261825 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j998b"] Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.307444 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.991893 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12581950-1ba2-48cc-ace1-798bfc3c6a54" path="/var/lib/kubelet/pods/12581950-1ba2-48cc-ace1-798bfc3c6a54/volumes" Feb 26 17:43:22 crc kubenswrapper[4805]: I0226 17:43:22.999601 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-s2hcq"] Feb 26 17:43:23 crc kubenswrapper[4805]: I0226 17:43:23.186869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a","Type":"ContainerStarted","Data":"399b267b3880ed3d50351be914f8adef603de010d4bc7c0a87b652ab46a07e88"} Feb 26 17:43:23 crc kubenswrapper[4805]: I0226 17:43:23.224892 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2","Type":"ContainerStarted","Data":"d4af62ad3006a069f1c9dab004db8b26d73585dd07904caf685c7c82d5ce96b2"} Feb 26 17:43:23 crc kubenswrapper[4805]: I0226 17:43:23.233090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" event={"ID":"464c1481-f210-4c07-90ca-215492bf4ebe","Type":"ContainerStarted","Data":"9b95a0f3f854249de455511c1d3eff473ea6b05a523b2848df3894ac78417f29"} Feb 26 17:43:23 crc kubenswrapper[4805]: E0226 17:43:23.647546 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod464c1481_f210_4c07_90ca_215492bf4ebe.slice/crio-conmon-246cd023dd306afe55c663b32bd7540b1b438506c40b55ef862304c0c669e59e.scope\": RecentStats: unable to find data in memory cache]" Feb 26 17:43:24 crc kubenswrapper[4805]: I0226 17:43:24.242927 4805 generic.go:334] "Generic (PLEG): container finished" podID="464c1481-f210-4c07-90ca-215492bf4ebe" 
containerID="246cd023dd306afe55c663b32bd7540b1b438506c40b55ef862304c0c669e59e" exitCode=0 Feb 26 17:43:24 crc kubenswrapper[4805]: I0226 17:43:24.243029 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" event={"ID":"464c1481-f210-4c07-90ca-215492bf4ebe","Type":"ContainerDied","Data":"246cd023dd306afe55c663b32bd7540b1b438506c40b55ef862304c0c669e59e"} Feb 26 17:43:26 crc kubenswrapper[4805]: I0226 17:43:26.276510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" event={"ID":"464c1481-f210-4c07-90ca-215492bf4ebe","Type":"ContainerStarted","Data":"d069ec6a9d718abc73ef47ab096dd7f9db0a3a4df552b0714b3e23625207830a"} Feb 26 17:43:26 crc kubenswrapper[4805]: I0226 17:43:26.276853 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:26 crc kubenswrapper[4805]: I0226 17:43:26.313934 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" podStartSLOduration=5.31390572 podStartE2EDuration="5.31390572s" podCreationTimestamp="2026-02-26 17:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:43:26.300554173 +0000 UTC m=+1720.862308522" watchObservedRunningTime="2026-02-26 17:43:26.31390572 +0000 UTC m=+1720.875660069" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.309243 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.415902 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-nw4mj"] Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.416146 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" 
podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerName="dnsmasq-dns" containerID="cri-o://6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d" gracePeriod=10 Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.772993 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-4znt7"] Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.775907 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.807180 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-4znt7"] Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945398 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945489 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945662 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-config\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945769 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.945792 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrf7z\" (UniqueName: \"kubernetes.io/projected/40a64ea1-bab4-4761-a37f-865fdcf16fc6-kube-api-access-rrf7z\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.977504 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:43:32 crc kubenswrapper[4805]: I0226 17:43:32.977549 4805 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048435 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-config\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048519 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrf7z\" (UniqueName: \"kubernetes.io/projected/40a64ea1-bab4-4761-a37f-865fdcf16fc6-kube-api-access-rrf7z\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048559 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-dns-svc\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048593 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048612 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.048700 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.049508 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-ovsdbserver-sb\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.049606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-config\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.049948 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-dns-svc\") pod 
\"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.050204 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-openstack-edpm-ipam\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.050827 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-ovsdbserver-nb\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.051011 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/40a64ea1-bab4-4761-a37f-865fdcf16fc6-dns-swift-storage-0\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.069965 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrf7z\" (UniqueName: \"kubernetes.io/projected/40a64ea1-bab4-4761-a37f-865fdcf16fc6-kube-api-access-rrf7z\") pod \"dnsmasq-dns-c4b758ff5-4znt7\" (UID: \"40a64ea1-bab4-4761-a37f-865fdcf16fc6\") " pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.095985 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.245837 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.354888 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8clm8\" (UniqueName: \"kubernetes.io/projected/a8874dd5-500c-4544-acd3-2749ede9ef23-kube-api-access-8clm8\") pod \"a8874dd5-500c-4544-acd3-2749ede9ef23\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.354965 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-nb\") pod \"a8874dd5-500c-4544-acd3-2749ede9ef23\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.355084 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-sb\") pod \"a8874dd5-500c-4544-acd3-2749ede9ef23\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.355145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-config\") pod \"a8874dd5-500c-4544-acd3-2749ede9ef23\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.355299 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-svc\") pod \"a8874dd5-500c-4544-acd3-2749ede9ef23\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.355339 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-swift-storage-0\") pod \"a8874dd5-500c-4544-acd3-2749ede9ef23\" (UID: \"a8874dd5-500c-4544-acd3-2749ede9ef23\") " Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.362487 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8874dd5-500c-4544-acd3-2749ede9ef23-kube-api-access-8clm8" (OuterVolumeSpecName: "kube-api-access-8clm8") pod "a8874dd5-500c-4544-acd3-2749ede9ef23" (UID: "a8874dd5-500c-4544-acd3-2749ede9ef23"). InnerVolumeSpecName "kube-api-access-8clm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.409525 4805 generic.go:334] "Generic (PLEG): container finished" podID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerID="6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d" exitCode=0 Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.409571 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" event={"ID":"a8874dd5-500c-4544-acd3-2749ede9ef23","Type":"ContainerDied","Data":"6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d"} Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.409601 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" event={"ID":"a8874dd5-500c-4544-acd3-2749ede9ef23","Type":"ContainerDied","Data":"6f5444bfbaaf92e678e327794a20525a96ac4e2aad619cec51aa438a8eaf9012"} Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.409622 4805 scope.go:117] "RemoveContainer" containerID="6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.409779 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54dd998c-nw4mj" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.432320 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a8874dd5-500c-4544-acd3-2749ede9ef23" (UID: "a8874dd5-500c-4544-acd3-2749ede9ef23"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.441334 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a8874dd5-500c-4544-acd3-2749ede9ef23" (UID: "a8874dd5-500c-4544-acd3-2749ede9ef23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.443302 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8874dd5-500c-4544-acd3-2749ede9ef23" (UID: "a8874dd5-500c-4544-acd3-2749ede9ef23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.448952 4805 scope.go:117] "RemoveContainer" containerID="1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.450034 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-config" (OuterVolumeSpecName: "config") pod "a8874dd5-500c-4544-acd3-2749ede9ef23" (UID: "a8874dd5-500c-4544-acd3-2749ede9ef23"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.461609 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.461646 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.461660 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.461675 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8clm8\" (UniqueName: \"kubernetes.io/projected/a8874dd5-500c-4544-acd3-2749ede9ef23-kube-api-access-8clm8\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.461687 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.468451 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a8874dd5-500c-4544-acd3-2749ede9ef23" (UID: "a8874dd5-500c-4544-acd3-2749ede9ef23"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.483152 4805 scope.go:117] "RemoveContainer" containerID="6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d" Feb 26 17:43:33 crc kubenswrapper[4805]: E0226 17:43:33.489093 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d\": container with ID starting with 6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d not found: ID does not exist" containerID="6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.489394 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d"} err="failed to get container status \"6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d\": rpc error: code = NotFound desc = could not find container \"6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d\": container with ID starting with 6f0253600401e12d562144f981f5564e5a121a2464c9bab9b590eaa5b5ca180d not found: ID does not exist" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.489486 4805 scope.go:117] "RemoveContainer" containerID="1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c" Feb 26 17:43:33 crc kubenswrapper[4805]: E0226 17:43:33.493387 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c\": container with ID starting with 1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c not found: ID does not exist" containerID="1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.493458 
4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c"} err="failed to get container status \"1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c\": rpc error: code = NotFound desc = could not find container \"1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c\": container with ID starting with 1d259d445bf5bf78c2a34d7677fb278d05ac0d8b41e64c32b2ce7ee2bf0dcf1c not found: ID does not exist" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.563446 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8874dd5-500c-4544-acd3-2749ede9ef23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.664605 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4b758ff5-4znt7"] Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.935443 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-nw4mj"] Feb 26 17:43:33 crc kubenswrapper[4805]: I0226 17:43:33.945104 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54dd998c-nw4mj"] Feb 26 17:43:34 crc kubenswrapper[4805]: I0226 17:43:34.421231 4805 generic.go:334] "Generic (PLEG): container finished" podID="40a64ea1-bab4-4761-a37f-865fdcf16fc6" containerID="eec473c049ccbcf83cd24316ae7c18bc5bd1818bc79ac1b487445ed38aa95e44" exitCode=0 Feb 26 17:43:34 crc kubenswrapper[4805]: I0226 17:43:34.421509 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" event={"ID":"40a64ea1-bab4-4761-a37f-865fdcf16fc6","Type":"ContainerDied","Data":"eec473c049ccbcf83cd24316ae7c18bc5bd1818bc79ac1b487445ed38aa95e44"} Feb 26 17:43:34 crc kubenswrapper[4805]: I0226 17:43:34.421534 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" event={"ID":"40a64ea1-bab4-4761-a37f-865fdcf16fc6","Type":"ContainerStarted","Data":"a696c1c5cb6c8a8a83755b95d44e825e6cd6ff92ce11637949838cd8c6cea569"} Feb 26 17:43:34 crc kubenswrapper[4805]: I0226 17:43:34.968355 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" path="/var/lib/kubelet/pods/a8874dd5-500c-4544-acd3-2749ede9ef23/volumes" Feb 26 17:43:35 crc kubenswrapper[4805]: I0226 17:43:35.440948 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" event={"ID":"40a64ea1-bab4-4761-a37f-865fdcf16fc6","Type":"ContainerStarted","Data":"cfcd554ac42eafd5da06816f49d5adfb36f8b6cf2cfaddb0fbebf531a263819e"} Feb 26 17:43:35 crc kubenswrapper[4805]: I0226 17:43:35.441212 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:35 crc kubenswrapper[4805]: I0226 17:43:35.475966 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" podStartSLOduration=3.475935406 podStartE2EDuration="3.475935406s" podCreationTimestamp="2026-02-26 17:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:43:35.473242558 +0000 UTC m=+1730.034996917" watchObservedRunningTime="2026-02-26 17:43:35.475935406 +0000 UTC m=+1730.037689745" Feb 26 17:43:40 crc kubenswrapper[4805]: I0226 17:43:40.415417 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.674328 4805 scope.go:117] "RemoveContainer" containerID="0a6fc87c6e1358119cfe76bc0ee3e0b170d487e30111a2a88de2ff8cb335f1b0" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.700200 4805 scope.go:117] "RemoveContainer" 
containerID="d240c37e4c6840bafea545fc49876c1ff4731b8d719f5aeb77752f049fd123a6" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.721580 4805 scope.go:117] "RemoveContainer" containerID="cfd7f4516b1e377f9ae6dd83a9a71bee2ee6a82f821e09f6f6ccff2dcb87d2e6" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.756489 4805 scope.go:117] "RemoveContainer" containerID="759f92c87f54e0f15f61adc82ebf04171eead97bed3a72586349d2316cd0318a" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.798137 4805 scope.go:117] "RemoveContainer" containerID="347128673f3264a96be15b8a58cdae6efd8f975ee77be9595613db09f28ec470" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.834642 4805 scope.go:117] "RemoveContainer" containerID="ac296c00f1e4df2dd1548a3e576398ad79c94db94c72b83b73cbb3332ea6f8ed" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.900784 4805 scope.go:117] "RemoveContainer" containerID="864b253ac289bfcf14b1df01e94f89382aecb3608168be88f389f72a35370aae" Feb 26 17:43:41 crc kubenswrapper[4805]: I0226 17:43:41.995613 4805 scope.go:117] "RemoveContainer" containerID="f610594ac0862580585db48e5977b66ff34a84e46ec0eabaf617e75c90e0ef23" Feb 26 17:43:42 crc kubenswrapper[4805]: I0226 17:43:42.088159 4805 scope.go:117] "RemoveContainer" containerID="9b1d804fe2c74f3e360988241c924b22f92f93ac52ec093add6b80e0ea81450c" Feb 26 17:43:42 crc kubenswrapper[4805]: I0226 17:43:42.135590 4805 scope.go:117] "RemoveContainer" containerID="d552197cadf847ffe729fef5968af668f3ef5785b74c02b27d1baec1d29c447b" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.099058 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4b758ff5-4znt7" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.169726 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-s2hcq"] Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.170244 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" containerName="dnsmasq-dns" containerID="cri-o://d069ec6a9d718abc73ef47ab096dd7f9db0a3a4df552b0714b3e23625207830a" gracePeriod=10 Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.617552 4805 generic.go:334] "Generic (PLEG): container finished" podID="464c1481-f210-4c07-90ca-215492bf4ebe" containerID="d069ec6a9d718abc73ef47ab096dd7f9db0a3a4df552b0714b3e23625207830a" exitCode=0 Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.618256 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" event={"ID":"464c1481-f210-4c07-90ca-215492bf4ebe","Type":"ContainerDied","Data":"d069ec6a9d718abc73ef47ab096dd7f9db0a3a4df552b0714b3e23625207830a"} Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.808112 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913606 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-nb\") pod \"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913686 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44h4n\" (UniqueName: \"kubernetes.io/projected/464c1481-f210-4c07-90ca-215492bf4ebe-kube-api-access-44h4n\") pod \"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913718 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-sb\") pod 
\"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913740 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-openstack-edpm-ipam\") pod \"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913813 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-svc\") pod \"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913875 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-config\") pod \"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.913901 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-swift-storage-0\") pod \"464c1481-f210-4c07-90ca-215492bf4ebe\" (UID: \"464c1481-f210-4c07-90ca-215492bf4ebe\") " Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.941674 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464c1481-f210-4c07-90ca-215492bf4ebe-kube-api-access-44h4n" (OuterVolumeSpecName: "kube-api-access-44h4n") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "kube-api-access-44h4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.978552 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.985257 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.987884 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.993866 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-config" (OuterVolumeSpecName: "config") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.994734 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:43 crc kubenswrapper[4805]: I0226 17:43:43.996409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "464c1481-f210-4c07-90ca-215492bf4ebe" (UID: "464c1481-f210-4c07-90ca-215492bf4ebe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016464 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016523 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44h4n\" (UniqueName: \"kubernetes.io/projected/464c1481-f210-4c07-90ca-215492bf4ebe-kube-api-access-44h4n\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016535 4805 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016544 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-openstack-edpm-ipam\") on 
node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016554 4805 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016564 4805 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-config\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.016573 4805 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/464c1481-f210-4c07-90ca-215492bf4ebe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.680596 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" event={"ID":"464c1481-f210-4c07-90ca-215492bf4ebe","Type":"ContainerDied","Data":"9b95a0f3f854249de455511c1d3eff473ea6b05a523b2848df3894ac78417f29"} Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.681007 4805 scope.go:117] "RemoveContainer" containerID="d069ec6a9d718abc73ef47ab096dd7f9db0a3a4df552b0714b3e23625207830a" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.680721 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc7c944bf-s2hcq" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.736609 4805 scope.go:117] "RemoveContainer" containerID="246cd023dd306afe55c663b32bd7540b1b438506c40b55ef862304c0c669e59e" Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.773463 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-s2hcq"] Feb 26 17:43:44 crc kubenswrapper[4805]: I0226 17:43:44.787391 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc7c944bf-s2hcq"] Feb 26 17:43:45 crc kubenswrapper[4805]: I0226 17:43:45.253695 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" path="/var/lib/kubelet/pods/464c1481-f210-4c07-90ca-215492bf4ebe/volumes" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.765480 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2"] Feb 26 17:43:51 crc kubenswrapper[4805]: E0226 17:43:51.767655 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerName="init" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.767678 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerName="init" Feb 26 17:43:51 crc kubenswrapper[4805]: E0226 17:43:51.767713 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerName="dnsmasq-dns" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.767724 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerName="dnsmasq-dns" Feb 26 17:43:51 crc kubenswrapper[4805]: E0226 17:43:51.767748 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" containerName="init" Feb 26 17:43:51 crc 
kubenswrapper[4805]: I0226 17:43:51.767756 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" containerName="init" Feb 26 17:43:51 crc kubenswrapper[4805]: E0226 17:43:51.767778 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" containerName="dnsmasq-dns" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.767786 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" containerName="dnsmasq-dns" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.768116 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8874dd5-500c-4544-acd3-2749ede9ef23" containerName="dnsmasq-dns" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.768146 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="464c1481-f210-4c07-90ca-215492bf4ebe" containerName="dnsmasq-dns" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.769490 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.779750 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.780293 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.781097 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.781471 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.789568 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2"] Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.956759 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.957063 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrzx\" (UniqueName: \"kubernetes.io/projected/2b789c4f-f811-4d44-8337-115a3a9d1ca7-kube-api-access-8mrzx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.957374 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:51 crc kubenswrapper[4805]: I0226 17:43:51.957542 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.059903 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.060042 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.060157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.060185 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mrzx\" (UniqueName: \"kubernetes.io/projected/2b789c4f-f811-4d44-8337-115a3a9d1ca7-kube-api-access-8mrzx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.069852 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.069868 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.070520 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.079076 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mrzx\" (UniqueName: \"kubernetes.io/projected/2b789c4f-f811-4d44-8337-115a3a9d1ca7-kube-api-access-8mrzx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.090816 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:43:52 crc kubenswrapper[4805]: I0226 17:43:52.892461 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2"] Feb 26 17:43:53 crc kubenswrapper[4805]: I0226 17:43:53.789119 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" event={"ID":"2b789c4f-f811-4d44-8337-115a3a9d1ca7","Type":"ContainerStarted","Data":"3600b94fb3bd93cbc39bbc346a603215df8811af0db269cef87e14b5c2f1b87e"} Feb 26 17:43:54 crc kubenswrapper[4805]: I0226 17:43:54.811923 4805 generic.go:334] "Generic (PLEG): container finished" podID="3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2" containerID="d4af62ad3006a069f1c9dab004db8b26d73585dd07904caf685c7c82d5ce96b2" exitCode=0 Feb 26 17:43:54 crc kubenswrapper[4805]: I0226 17:43:54.812052 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2","Type":"ContainerDied","Data":"d4af62ad3006a069f1c9dab004db8b26d73585dd07904caf685c7c82d5ce96b2"} Feb 26 17:43:55 crc kubenswrapper[4805]: I0226 17:43:55.824822 4805 generic.go:334] "Generic (PLEG): container finished" podID="0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a" 
containerID="399b267b3880ed3d50351be914f8adef603de010d4bc7c0a87b652ab46a07e88" exitCode=0 Feb 26 17:43:55 crc kubenswrapper[4805]: I0226 17:43:55.825215 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a","Type":"ContainerDied","Data":"399b267b3880ed3d50351be914f8adef603de010d4bc7c0a87b652ab46a07e88"} Feb 26 17:43:56 crc kubenswrapper[4805]: I0226 17:43:56.840696 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2","Type":"ContainerStarted","Data":"3efca8d3f2a791b0ec115724bf638ce250f5ec2afcf83ee65877f59122262774"} Feb 26 17:43:56 crc kubenswrapper[4805]: I0226 17:43:56.841406 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 26 17:43:56 crc kubenswrapper[4805]: I0226 17:43:56.843283 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a","Type":"ContainerStarted","Data":"b09b8b67e3737ee3a91585e01da608facc901707b71750326a06050856780e67"} Feb 26 17:43:56 crc kubenswrapper[4805]: I0226 17:43:56.844237 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:43:56 crc kubenswrapper[4805]: I0226 17:43:56.882783 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.882728774 podStartE2EDuration="37.882728774s" podCreationTimestamp="2026-02-26 17:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:43:56.864904344 +0000 UTC m=+1751.426658693" watchObservedRunningTime="2026-02-26 17:43:56.882728774 +0000 UTC m=+1751.444483143" Feb 26 17:43:56 crc kubenswrapper[4805]: I0226 17:43:56.908950 4805 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.908923475 podStartE2EDuration="37.908923475s" podCreationTimestamp="2026-02-26 17:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:43:56.887388251 +0000 UTC m=+1751.449142590" watchObservedRunningTime="2026-02-26 17:43:56.908923475 +0000 UTC m=+1751.470677814" Feb 26 17:43:58 crc kubenswrapper[4805]: I0226 17:43:58.291842 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.170369 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535464-7bbhk"] Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.173000 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.176675 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.177084 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.177139 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.193511 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535464-7bbhk"] Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.292330 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jjtj\" (UniqueName: 
\"kubernetes.io/projected/aa023ecd-19d5-44fa-89da-193222978970-kube-api-access-7jjtj\") pod \"auto-csr-approver-29535464-7bbhk\" (UID: \"aa023ecd-19d5-44fa-89da-193222978970\") " pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.394469 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jjtj\" (UniqueName: \"kubernetes.io/projected/aa023ecd-19d5-44fa-89da-193222978970-kube-api-access-7jjtj\") pod \"auto-csr-approver-29535464-7bbhk\" (UID: \"aa023ecd-19d5-44fa-89da-193222978970\") " pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.418722 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jjtj\" (UniqueName: \"kubernetes.io/projected/aa023ecd-19d5-44fa-89da-193222978970-kube-api-access-7jjtj\") pod \"auto-csr-approver-29535464-7bbhk\" (UID: \"aa023ecd-19d5-44fa-89da-193222978970\") " pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:00 crc kubenswrapper[4805]: I0226 17:44:00.499926 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:02 crc kubenswrapper[4805]: I0226 17:44:02.978172 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:44:02 crc kubenswrapper[4805]: I0226 17:44:02.979217 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:44:06 crc kubenswrapper[4805]: I0226 17:44:06.010038 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535464-7bbhk"] Feb 26 17:44:06 crc kubenswrapper[4805]: I0226 17:44:06.990386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" event={"ID":"aa023ecd-19d5-44fa-89da-193222978970","Type":"ContainerStarted","Data":"884c08811394a4ff5b17b3c230ad3d8a8b19eb7383b51577d576eb88d9b23aef"} Feb 26 17:44:06 crc kubenswrapper[4805]: I0226 17:44:06.991157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" event={"ID":"2b789c4f-f811-4d44-8337-115a3a9d1ca7","Type":"ContainerStarted","Data":"a14905a765ea39ca590d5913d345adb8157c21442e1360468a9eb5247f530e12"} Feb 26 17:44:07 crc kubenswrapper[4805]: I0226 17:44:07.026651 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" podStartSLOduration=2.901177059 podStartE2EDuration="16.026621632s" podCreationTimestamp="2026-02-26 17:43:51 +0000 UTC" 
firstStartedPulling="2026-02-26 17:43:52.895377841 +0000 UTC m=+1747.457132180" lastFinishedPulling="2026-02-26 17:44:06.020822414 +0000 UTC m=+1760.582576753" observedRunningTime="2026-02-26 17:44:07.014887455 +0000 UTC m=+1761.576641804" watchObservedRunningTime="2026-02-26 17:44:07.026621632 +0000 UTC m=+1761.588375971" Feb 26 17:44:07 crc kubenswrapper[4805]: I0226 17:44:07.990061 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" event={"ID":"aa023ecd-19d5-44fa-89da-193222978970","Type":"ContainerStarted","Data":"7d516b9507afe0f04a90352e764e81cf937be244bb4d282e8177b35814eb1be7"} Feb 26 17:44:08 crc kubenswrapper[4805]: I0226 17:44:08.014455 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" podStartSLOduration=6.866198801 podStartE2EDuration="8.014432275s" podCreationTimestamp="2026-02-26 17:44:00 +0000 UTC" firstStartedPulling="2026-02-26 17:44:06.0071927 +0000 UTC m=+1760.568947039" lastFinishedPulling="2026-02-26 17:44:07.155426174 +0000 UTC m=+1761.717180513" observedRunningTime="2026-02-26 17:44:08.010164007 +0000 UTC m=+1762.571918356" watchObservedRunningTime="2026-02-26 17:44:08.014432275 +0000 UTC m=+1762.576186624" Feb 26 17:44:08 crc kubenswrapper[4805]: I0226 17:44:08.998196 4805 generic.go:334] "Generic (PLEG): container finished" podID="aa023ecd-19d5-44fa-89da-193222978970" containerID="7d516b9507afe0f04a90352e764e81cf937be244bb4d282e8177b35814eb1be7" exitCode=0 Feb 26 17:44:08 crc kubenswrapper[4805]: I0226 17:44:08.998248 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" event={"ID":"aa023ecd-19d5-44fa-89da-193222978970","Type":"ContainerDied","Data":"7d516b9507afe0f04a90352e764e81cf937be244bb4d282e8177b35814eb1be7"} Feb 26 17:44:09 crc kubenswrapper[4805]: I0226 17:44:09.873682 4805 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/rabbitmq-server-0" podUID="3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5671: connect: connection refused" Feb 26 17:44:10 crc kubenswrapper[4805]: I0226 17:44:10.247998 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.246:5671: connect: connection refused" Feb 26 17:44:10 crc kubenswrapper[4805]: I0226 17:44:10.409451 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:10 crc kubenswrapper[4805]: I0226 17:44:10.470453 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjtj\" (UniqueName: \"kubernetes.io/projected/aa023ecd-19d5-44fa-89da-193222978970-kube-api-access-7jjtj\") pod \"aa023ecd-19d5-44fa-89da-193222978970\" (UID: \"aa023ecd-19d5-44fa-89da-193222978970\") " Feb 26 17:44:10 crc kubenswrapper[4805]: I0226 17:44:10.476645 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa023ecd-19d5-44fa-89da-193222978970-kube-api-access-7jjtj" (OuterVolumeSpecName: "kube-api-access-7jjtj") pod "aa023ecd-19d5-44fa-89da-193222978970" (UID: "aa023ecd-19d5-44fa-89da-193222978970"). InnerVolumeSpecName "kube-api-access-7jjtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:44:10 crc kubenswrapper[4805]: I0226 17:44:10.575496 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jjtj\" (UniqueName: \"kubernetes.io/projected/aa023ecd-19d5-44fa-89da-193222978970-kube-api-access-7jjtj\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:11 crc kubenswrapper[4805]: I0226 17:44:11.019602 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" event={"ID":"aa023ecd-19d5-44fa-89da-193222978970","Type":"ContainerDied","Data":"884c08811394a4ff5b17b3c230ad3d8a8b19eb7383b51577d576eb88d9b23aef"} Feb 26 17:44:11 crc kubenswrapper[4805]: I0226 17:44:11.019651 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="884c08811394a4ff5b17b3c230ad3d8a8b19eb7383b51577d576eb88d9b23aef" Feb 26 17:44:11 crc kubenswrapper[4805]: I0226 17:44:11.019706 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535464-7bbhk" Feb 26 17:44:11 crc kubenswrapper[4805]: I0226 17:44:11.093211 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535458-b6pvp"] Feb 26 17:44:11 crc kubenswrapper[4805]: I0226 17:44:11.104696 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535458-b6pvp"] Feb 26 17:44:12 crc kubenswrapper[4805]: I0226 17:44:12.969349 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea12120-9c03-4819-a2ae-61bf83333dea" path="/var/lib/kubelet/pods/3ea12120-9c03-4819-a2ae-61bf83333dea/volumes" Feb 26 17:44:19 crc kubenswrapper[4805]: I0226 17:44:19.114647 4805 generic.go:334] "Generic (PLEG): container finished" podID="2b789c4f-f811-4d44-8337-115a3a9d1ca7" containerID="a14905a765ea39ca590d5913d345adb8157c21442e1360468a9eb5247f530e12" exitCode=0 Feb 26 17:44:19 crc kubenswrapper[4805]: I0226 17:44:19.114733 4805 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" event={"ID":"2b789c4f-f811-4d44-8337-115a3a9d1ca7","Type":"ContainerDied","Data":"a14905a765ea39ca590d5913d345adb8157c21442e1360468a9eb5247f530e12"} Feb 26 17:44:19 crc kubenswrapper[4805]: I0226 17:44:19.873658 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 26 17:44:20 crc kubenswrapper[4805]: I0226 17:44:20.248305 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 26 17:44:20 crc kubenswrapper[4805]: I0226 17:44:20.950775 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.023937 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-inventory\") pod \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.024129 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-repo-setup-combined-ca-bundle\") pod \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.024328 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mrzx\" (UniqueName: \"kubernetes.io/projected/2b789c4f-f811-4d44-8337-115a3a9d1ca7-kube-api-access-8mrzx\") pod \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.024438 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-ssh-key-openstack-edpm-ipam\") pod \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\" (UID: \"2b789c4f-f811-4d44-8337-115a3a9d1ca7\") " Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.050288 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2b789c4f-f811-4d44-8337-115a3a9d1ca7" (UID: "2b789c4f-f811-4d44-8337-115a3a9d1ca7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.061491 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b789c4f-f811-4d44-8337-115a3a9d1ca7-kube-api-access-8mrzx" (OuterVolumeSpecName: "kube-api-access-8mrzx") pod "2b789c4f-f811-4d44-8337-115a3a9d1ca7" (UID: "2b789c4f-f811-4d44-8337-115a3a9d1ca7"). InnerVolumeSpecName "kube-api-access-8mrzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.116270 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b789c4f-f811-4d44-8337-115a3a9d1ca7" (UID: "2b789c4f-f811-4d44-8337-115a3a9d1ca7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.118820 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-inventory" (OuterVolumeSpecName: "inventory") pod "2b789c4f-f811-4d44-8337-115a3a9d1ca7" (UID: "2b789c4f-f811-4d44-8337-115a3a9d1ca7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.127591 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.127626 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.127638 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mrzx\" (UniqueName: \"kubernetes.io/projected/2b789c4f-f811-4d44-8337-115a3a9d1ca7-kube-api-access-8mrzx\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.127648 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b789c4f-f811-4d44-8337-115a3a9d1ca7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.160595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" event={"ID":"2b789c4f-f811-4d44-8337-115a3a9d1ca7","Type":"ContainerDied","Data":"3600b94fb3bd93cbc39bbc346a603215df8811af0db269cef87e14b5c2f1b87e"} Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.160650 
4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3600b94fb3bd93cbc39bbc346a603215df8811af0db269cef87e14b5c2f1b87e" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.160821 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.363477 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8"] Feb 26 17:44:21 crc kubenswrapper[4805]: E0226 17:44:21.363993 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa023ecd-19d5-44fa-89da-193222978970" containerName="oc" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.364010 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa023ecd-19d5-44fa-89da-193222978970" containerName="oc" Feb 26 17:44:21 crc kubenswrapper[4805]: E0226 17:44:21.364076 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b789c4f-f811-4d44-8337-115a3a9d1ca7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.364087 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b789c4f-f811-4d44-8337-115a3a9d1ca7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.364357 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b789c4f-f811-4d44-8337-115a3a9d1ca7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.364392 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa023ecd-19d5-44fa-89da-193222978970" containerName="oc" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.365342 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.378393 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.378973 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.381255 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.382327 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.387941 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.388693 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6jq\" (UniqueName: \"kubernetes.io/projected/bf741ef5-d678-40c2-99b1-7e2f4db7787a-kube-api-access-pj6jq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.388811 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.393801 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8"] Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.491092 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6jq\" (UniqueName: \"kubernetes.io/projected/bf741ef5-d678-40c2-99b1-7e2f4db7787a-kube-api-access-pj6jq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.491400 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.491721 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.495667 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.495892 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.515432 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6jq\" (UniqueName: \"kubernetes.io/projected/bf741ef5-d678-40c2-99b1-7e2f4db7787a-kube-api-access-pj6jq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pllc8\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:21 crc kubenswrapper[4805]: I0226 17:44:21.689359 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:22 crc kubenswrapper[4805]: W0226 17:44:22.330989 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf741ef5_d678_40c2_99b1_7e2f4db7787a.slice/crio-36fcadc0bb184d5d625534f01381534b72b27eb8ea7c0624d4bb52d934503ddf WatchSource:0}: Error finding container 36fcadc0bb184d5d625534f01381534b72b27eb8ea7c0624d4bb52d934503ddf: Status 404 returned error can't find the container with id 36fcadc0bb184d5d625534f01381534b72b27eb8ea7c0624d4bb52d934503ddf Feb 26 17:44:22 crc kubenswrapper[4805]: I0226 17:44:22.338646 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8"] Feb 26 17:44:23 crc kubenswrapper[4805]: I0226 17:44:23.185217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" event={"ID":"bf741ef5-d678-40c2-99b1-7e2f4db7787a","Type":"ContainerStarted","Data":"bf4b3e05c74b11c49ffb32054bc31a660b0b243e7f9ff8555ced373d2d660d5a"} Feb 26 17:44:23 crc kubenswrapper[4805]: I0226 17:44:23.185558 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" event={"ID":"bf741ef5-d678-40c2-99b1-7e2f4db7787a","Type":"ContainerStarted","Data":"36fcadc0bb184d5d625534f01381534b72b27eb8ea7c0624d4bb52d934503ddf"} Feb 26 17:44:24 crc kubenswrapper[4805]: I0226 17:44:24.215104 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" podStartSLOduration=2.6666285690000002 podStartE2EDuration="3.215081988s" podCreationTimestamp="2026-02-26 17:44:21 +0000 UTC" firstStartedPulling="2026-02-26 17:44:22.333733183 +0000 UTC m=+1776.895487522" lastFinishedPulling="2026-02-26 17:44:22.882186612 +0000 UTC m=+1777.443940941" 
observedRunningTime="2026-02-26 17:44:24.20882436 +0000 UTC m=+1778.770578699" watchObservedRunningTime="2026-02-26 17:44:24.215081988 +0000 UTC m=+1778.776836327" Feb 26 17:44:26 crc kubenswrapper[4805]: I0226 17:44:26.218190 4805 generic.go:334] "Generic (PLEG): container finished" podID="bf741ef5-d678-40c2-99b1-7e2f4db7787a" containerID="bf4b3e05c74b11c49ffb32054bc31a660b0b243e7f9ff8555ced373d2d660d5a" exitCode=0 Feb 26 17:44:26 crc kubenswrapper[4805]: I0226 17:44:26.218298 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" event={"ID":"bf741ef5-d678-40c2-99b1-7e2f4db7787a","Type":"ContainerDied","Data":"bf4b3e05c74b11c49ffb32054bc31a660b0b243e7f9ff8555ced373d2d660d5a"} Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.720072 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.832672 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-ssh-key-openstack-edpm-ipam\") pod \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.832828 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6jq\" (UniqueName: \"kubernetes.io/projected/bf741ef5-d678-40c2-99b1-7e2f4db7787a-kube-api-access-pj6jq\") pod \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\" (UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.832860 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-inventory\") pod \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\" 
(UID: \"bf741ef5-d678-40c2-99b1-7e2f4db7787a\") " Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.839005 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf741ef5-d678-40c2-99b1-7e2f4db7787a-kube-api-access-pj6jq" (OuterVolumeSpecName: "kube-api-access-pj6jq") pod "bf741ef5-d678-40c2-99b1-7e2f4db7787a" (UID: "bf741ef5-d678-40c2-99b1-7e2f4db7787a"). InnerVolumeSpecName "kube-api-access-pj6jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.868003 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-inventory" (OuterVolumeSpecName: "inventory") pod "bf741ef5-d678-40c2-99b1-7e2f4db7787a" (UID: "bf741ef5-d678-40c2-99b1-7e2f4db7787a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.872819 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bf741ef5-d678-40c2-99b1-7e2f4db7787a" (UID: "bf741ef5-d678-40c2-99b1-7e2f4db7787a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.936490 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.936535 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj6jq\" (UniqueName: \"kubernetes.io/projected/bf741ef5-d678-40c2-99b1-7e2f4db7787a-kube-api-access-pj6jq\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:27 crc kubenswrapper[4805]: I0226 17:44:27.936549 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf741ef5-d678-40c2-99b1-7e2f4db7787a-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.240919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" event={"ID":"bf741ef5-d678-40c2-99b1-7e2f4db7787a","Type":"ContainerDied","Data":"36fcadc0bb184d5d625534f01381534b72b27eb8ea7c0624d4bb52d934503ddf"} Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.243135 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36fcadc0bb184d5d625534f01381534b72b27eb8ea7c0624d4bb52d934503ddf" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.241713 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pllc8" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.318458 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh"] Feb 26 17:44:28 crc kubenswrapper[4805]: E0226 17:44:28.319113 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf741ef5-d678-40c2-99b1-7e2f4db7787a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.319138 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf741ef5-d678-40c2-99b1-7e2f4db7787a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.319349 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf741ef5-d678-40c2-99b1-7e2f4db7787a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.320313 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.322633 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.322644 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.323072 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.323100 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.349110 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh"] Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.446286 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.446507 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: 
I0226 17:44:28.446569 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.446630 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6xj\" (UniqueName: \"kubernetes.io/projected/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-kube-api-access-2x6xj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.549145 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.549207 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.549237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x6xj\" (UniqueName: 
\"kubernetes.io/projected/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-kube-api-access-2x6xj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.549309 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.554380 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.554609 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.554625 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.566390 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x6xj\" (UniqueName: \"kubernetes.io/projected/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-kube-api-access-2x6xj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:28 crc kubenswrapper[4805]: I0226 17:44:28.641240 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:44:29 crc kubenswrapper[4805]: I0226 17:44:29.194679 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh"] Feb 26 17:44:29 crc kubenswrapper[4805]: I0226 17:44:29.198661 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:44:29 crc kubenswrapper[4805]: I0226 17:44:29.251526 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" event={"ID":"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de","Type":"ContainerStarted","Data":"6430d7c1a86bfdcf964bf140da220011b79d6d51a77512e217e174bba6ca1855"} Feb 26 17:44:30 crc kubenswrapper[4805]: I0226 17:44:30.262973 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" event={"ID":"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de","Type":"ContainerStarted","Data":"349b2172f64e54ec974e8001e4cf1865ad74f8975676e9692b7e128705d6084d"} Feb 26 17:44:30 crc kubenswrapper[4805]: I0226 17:44:30.290545 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" podStartSLOduration=1.889288895 
podStartE2EDuration="2.290518666s" podCreationTimestamp="2026-02-26 17:44:28 +0000 UTC" firstStartedPulling="2026-02-26 17:44:29.19843019 +0000 UTC m=+1783.760184519" lastFinishedPulling="2026-02-26 17:44:29.599659951 +0000 UTC m=+1784.161414290" observedRunningTime="2026-02-26 17:44:30.277951169 +0000 UTC m=+1784.839705518" watchObservedRunningTime="2026-02-26 17:44:30.290518666 +0000 UTC m=+1784.852273005" Feb 26 17:44:32 crc kubenswrapper[4805]: I0226 17:44:32.978870 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:44:32 crc kubenswrapper[4805]: I0226 17:44:32.979397 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:44:32 crc kubenswrapper[4805]: I0226 17:44:32.979454 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:44:32 crc kubenswrapper[4805]: I0226 17:44:32.980418 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:44:32 crc kubenswrapper[4805]: I0226 17:44:32.980492 4805 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" gracePeriod=600 Feb 26 17:44:33 crc kubenswrapper[4805]: E0226 17:44:33.119409 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:44:33 crc kubenswrapper[4805]: I0226 17:44:33.303476 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" exitCode=0 Feb 26 17:44:33 crc kubenswrapper[4805]: I0226 17:44:33.303522 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732"} Feb 26 17:44:33 crc kubenswrapper[4805]: I0226 17:44:33.303589 4805 scope.go:117] "RemoveContainer" containerID="fda738fe0407aa3e4e71cd0054243c0ef019a44dbbf48701bf838c7b50aeb1e6" Feb 26 17:44:33 crc kubenswrapper[4805]: I0226 17:44:33.304462 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:44:33 crc kubenswrapper[4805]: E0226 17:44:33.304753 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:44:42 crc kubenswrapper[4805]: I0226 17:44:42.887241 4805 scope.go:117] "RemoveContainer" containerID="ba092aa320bc7b5360b30987fa561519721cdf764e9d7169e2e3f7bce8d0c26d" Feb 26 17:44:44 crc kubenswrapper[4805]: I0226 17:44:44.956130 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:44:44 crc kubenswrapper[4805]: E0226 17:44:44.956700 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:44:56 crc kubenswrapper[4805]: I0226 17:44:56.961549 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:44:56 crc kubenswrapper[4805]: E0226 17:44:56.962551 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.148699 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb"] Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.150899 
4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.154491 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwnh9\" (UniqueName: \"kubernetes.io/projected/2abd567d-6626-47ee-8469-0e70cff16a4d-kube-api-access-zwnh9\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.154616 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2abd567d-6626-47ee-8469-0e70cff16a4d-secret-volume\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.154813 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.154957 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abd567d-6626-47ee-8469-0e70cff16a4d-config-volume\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.155407 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.159100 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb"] Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.257653 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abd567d-6626-47ee-8469-0e70cff16a4d-config-volume\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.257832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnh9\" (UniqueName: \"kubernetes.io/projected/2abd567d-6626-47ee-8469-0e70cff16a4d-kube-api-access-zwnh9\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.257888 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2abd567d-6626-47ee-8469-0e70cff16a4d-secret-volume\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.258744 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abd567d-6626-47ee-8469-0e70cff16a4d-config-volume\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.265077 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2abd567d-6626-47ee-8469-0e70cff16a4d-secret-volume\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.279005 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnh9\" (UniqueName: \"kubernetes.io/projected/2abd567d-6626-47ee-8469-0e70cff16a4d-kube-api-access-zwnh9\") pod \"collect-profiles-29535465-7j9hb\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.476859 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:00 crc kubenswrapper[4805]: I0226 17:45:00.967308 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb"] Feb 26 17:45:01 crc kubenswrapper[4805]: I0226 17:45:01.612422 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" event={"ID":"2abd567d-6626-47ee-8469-0e70cff16a4d","Type":"ContainerStarted","Data":"c5d675fd4f23b5836b516950fd0440ed74fd2b004d03765d871bf6b708eebd27"} Feb 26 17:45:01 crc kubenswrapper[4805]: I0226 17:45:01.612754 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" event={"ID":"2abd567d-6626-47ee-8469-0e70cff16a4d","Type":"ContainerStarted","Data":"cb20d6384bae7803676968fc5ff3583c42fa1e28374c47ad028f9d285626af99"} Feb 26 17:45:01 crc kubenswrapper[4805]: I0226 17:45:01.636037 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" 
podStartSLOduration=1.6360001830000002 podStartE2EDuration="1.636000183s" podCreationTimestamp="2026-02-26 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 17:45:01.628964906 +0000 UTC m=+1816.190719245" watchObservedRunningTime="2026-02-26 17:45:01.636000183 +0000 UTC m=+1816.197754522" Feb 26 17:45:02 crc kubenswrapper[4805]: I0226 17:45:02.627039 4805 generic.go:334] "Generic (PLEG): container finished" podID="2abd567d-6626-47ee-8469-0e70cff16a4d" containerID="c5d675fd4f23b5836b516950fd0440ed74fd2b004d03765d871bf6b708eebd27" exitCode=0 Feb 26 17:45:02 crc kubenswrapper[4805]: I0226 17:45:02.627106 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" event={"ID":"2abd567d-6626-47ee-8469-0e70cff16a4d","Type":"ContainerDied","Data":"c5d675fd4f23b5836b516950fd0440ed74fd2b004d03765d871bf6b708eebd27"} Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.061296 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.157739 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2abd567d-6626-47ee-8469-0e70cff16a4d-secret-volume\") pod \"2abd567d-6626-47ee-8469-0e70cff16a4d\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.158275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abd567d-6626-47ee-8469-0e70cff16a4d-config-volume\") pod \"2abd567d-6626-47ee-8469-0e70cff16a4d\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.158422 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwnh9\" (UniqueName: \"kubernetes.io/projected/2abd567d-6626-47ee-8469-0e70cff16a4d-kube-api-access-zwnh9\") pod \"2abd567d-6626-47ee-8469-0e70cff16a4d\" (UID: \"2abd567d-6626-47ee-8469-0e70cff16a4d\") " Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.158844 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2abd567d-6626-47ee-8469-0e70cff16a4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "2abd567d-6626-47ee-8469-0e70cff16a4d" (UID: "2abd567d-6626-47ee-8469-0e70cff16a4d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.159062 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2abd567d-6626-47ee-8469-0e70cff16a4d-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.163798 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2abd567d-6626-47ee-8469-0e70cff16a4d-kube-api-access-zwnh9" (OuterVolumeSpecName: "kube-api-access-zwnh9") pod "2abd567d-6626-47ee-8469-0e70cff16a4d" (UID: "2abd567d-6626-47ee-8469-0e70cff16a4d"). InnerVolumeSpecName "kube-api-access-zwnh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.173826 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2abd567d-6626-47ee-8469-0e70cff16a4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2abd567d-6626-47ee-8469-0e70cff16a4d" (UID: "2abd567d-6626-47ee-8469-0e70cff16a4d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.261663 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwnh9\" (UniqueName: \"kubernetes.io/projected/2abd567d-6626-47ee-8469-0e70cff16a4d-kube-api-access-zwnh9\") on node \"crc\" DevicePath \"\"" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.261708 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2abd567d-6626-47ee-8469-0e70cff16a4d-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.663167 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" event={"ID":"2abd567d-6626-47ee-8469-0e70cff16a4d","Type":"ContainerDied","Data":"cb20d6384bae7803676968fc5ff3583c42fa1e28374c47ad028f9d285626af99"} Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.663485 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb20d6384bae7803676968fc5ff3583c42fa1e28374c47ad028f9d285626af99" Feb 26 17:45:04 crc kubenswrapper[4805]: I0226 17:45:04.663206 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb" Feb 26 17:45:10 crc kubenswrapper[4805]: I0226 17:45:10.953526 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:45:10 crc kubenswrapper[4805]: E0226 17:45:10.956130 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:45:21 crc kubenswrapper[4805]: I0226 17:45:21.953983 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:45:21 crc kubenswrapper[4805]: E0226 17:45:21.954791 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:45:32 crc kubenswrapper[4805]: I0226 17:45:32.953422 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:45:32 crc kubenswrapper[4805]: E0226 17:45:32.954313 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.047846 4805 scope.go:117] "RemoveContainer" containerID="69f3081626bc75e68bbd59df574ddf15ced84bc4a9e4f0d15bd4b6b3e5c2490b" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.077375 4805 scope.go:117] "RemoveContainer" containerID="3a6f647bb63e20e9c1916667cba8f847a673b3c9f5180dbc11f6fe19a8817066" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.140243 4805 scope.go:117] "RemoveContainer" containerID="812aaa90d7c85aefa95f8af2f204c58b96a105e80a0926d93ef0839a20cb6c0a" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.178295 4805 scope.go:117] "RemoveContainer" containerID="5d3e2c52590370cb3bb0c91f5dc57c28c1677079f74b3e277f915a0266c9046b" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.228257 4805 scope.go:117] "RemoveContainer" containerID="a26931c8694fe1c33ac5d7e1bcfb562f5c502f20f4d2afd41e4c61a2bcbb4889" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.332680 4805 scope.go:117] "RemoveContainer" containerID="a3bcd7d0409f3b9a2bcc40555c21dd12fc3259f4315ff85210cd1622306063ac" Feb 26 17:45:43 crc kubenswrapper[4805]: I0226 17:45:43.368564 4805 scope.go:117] "RemoveContainer" containerID="4c9f921177f9e86148eead87a649e0cedb1d4ce62e84e748155467e4e85f58ef" Feb 26 17:45:47 crc kubenswrapper[4805]: I0226 17:45:47.953866 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:45:47 crc kubenswrapper[4805]: E0226 17:45:47.955770 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.161290 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535466-kwdlq"] Feb 26 17:46:00 crc kubenswrapper[4805]: E0226 17:46:00.162393 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2abd567d-6626-47ee-8469-0e70cff16a4d" containerName="collect-profiles" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.162411 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2abd567d-6626-47ee-8469-0e70cff16a4d" containerName="collect-profiles" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.162713 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2abd567d-6626-47ee-8469-0e70cff16a4d" containerName="collect-profiles" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.163636 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.165930 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.168637 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.169219 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.176429 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535466-kwdlq"] Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.210581 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrq4z\" (UniqueName: \"kubernetes.io/projected/472fb3d5-0b1c-4668-93f8-06f6da00569f-kube-api-access-vrq4z\") pod \"auto-csr-approver-29535466-kwdlq\" (UID: \"472fb3d5-0b1c-4668-93f8-06f6da00569f\") " pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.311696 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrq4z\" (UniqueName: \"kubernetes.io/projected/472fb3d5-0b1c-4668-93f8-06f6da00569f-kube-api-access-vrq4z\") pod \"auto-csr-approver-29535466-kwdlq\" (UID: \"472fb3d5-0b1c-4668-93f8-06f6da00569f\") " pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.329640 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrq4z\" (UniqueName: \"kubernetes.io/projected/472fb3d5-0b1c-4668-93f8-06f6da00569f-kube-api-access-vrq4z\") pod \"auto-csr-approver-29535466-kwdlq\" (UID: \"472fb3d5-0b1c-4668-93f8-06f6da00569f\") " 
pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.509465 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:00 crc kubenswrapper[4805]: I0226 17:46:00.971401 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535466-kwdlq"] Feb 26 17:46:01 crc kubenswrapper[4805]: I0226 17:46:01.257804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" event={"ID":"472fb3d5-0b1c-4668-93f8-06f6da00569f","Type":"ContainerStarted","Data":"27633cef26db3ceba359235dd8c5618917fcf83c5c95797d594699e1a9ad3fe9"} Feb 26 17:46:02 crc kubenswrapper[4805]: I0226 17:46:02.268457 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" event={"ID":"472fb3d5-0b1c-4668-93f8-06f6da00569f","Type":"ContainerStarted","Data":"77b7861274b136b1a52e08d8d6651e01bd31d33e2e59523c1c4424105d63df2a"} Feb 26 17:46:02 crc kubenswrapper[4805]: I0226 17:46:02.286353 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" podStartSLOduration=1.341702908 podStartE2EDuration="2.286335211s" podCreationTimestamp="2026-02-26 17:46:00 +0000 UTC" firstStartedPulling="2026-02-26 17:46:00.978436265 +0000 UTC m=+1875.540190604" lastFinishedPulling="2026-02-26 17:46:01.923068568 +0000 UTC m=+1876.484822907" observedRunningTime="2026-02-26 17:46:02.281515689 +0000 UTC m=+1876.843270028" watchObservedRunningTime="2026-02-26 17:46:02.286335211 +0000 UTC m=+1876.848089550" Feb 26 17:46:02 crc kubenswrapper[4805]: I0226 17:46:02.953186 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:46:02 crc kubenswrapper[4805]: E0226 17:46:02.954172 4805 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:46:03 crc kubenswrapper[4805]: I0226 17:46:03.298729 4805 generic.go:334] "Generic (PLEG): container finished" podID="472fb3d5-0b1c-4668-93f8-06f6da00569f" containerID="77b7861274b136b1a52e08d8d6651e01bd31d33e2e59523c1c4424105d63df2a" exitCode=0 Feb 26 17:46:03 crc kubenswrapper[4805]: I0226 17:46:03.298786 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" event={"ID":"472fb3d5-0b1c-4668-93f8-06f6da00569f","Type":"ContainerDied","Data":"77b7861274b136b1a52e08d8d6651e01bd31d33e2e59523c1c4424105d63df2a"} Feb 26 17:46:04 crc kubenswrapper[4805]: I0226 17:46:04.705622 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:04 crc kubenswrapper[4805]: I0226 17:46:04.716476 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrq4z\" (UniqueName: \"kubernetes.io/projected/472fb3d5-0b1c-4668-93f8-06f6da00569f-kube-api-access-vrq4z\") pod \"472fb3d5-0b1c-4668-93f8-06f6da00569f\" (UID: \"472fb3d5-0b1c-4668-93f8-06f6da00569f\") " Feb 26 17:46:04 crc kubenswrapper[4805]: I0226 17:46:04.723296 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472fb3d5-0b1c-4668-93f8-06f6da00569f-kube-api-access-vrq4z" (OuterVolumeSpecName: "kube-api-access-vrq4z") pod "472fb3d5-0b1c-4668-93f8-06f6da00569f" (UID: "472fb3d5-0b1c-4668-93f8-06f6da00569f"). InnerVolumeSpecName "kube-api-access-vrq4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:46:04 crc kubenswrapper[4805]: I0226 17:46:04.818466 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrq4z\" (UniqueName: \"kubernetes.io/projected/472fb3d5-0b1c-4668-93f8-06f6da00569f-kube-api-access-vrq4z\") on node \"crc\" DevicePath \"\"" Feb 26 17:46:05 crc kubenswrapper[4805]: I0226 17:46:05.319379 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" event={"ID":"472fb3d5-0b1c-4668-93f8-06f6da00569f","Type":"ContainerDied","Data":"27633cef26db3ceba359235dd8c5618917fcf83c5c95797d594699e1a9ad3fe9"} Feb 26 17:46:05 crc kubenswrapper[4805]: I0226 17:46:05.319741 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27633cef26db3ceba359235dd8c5618917fcf83c5c95797d594699e1a9ad3fe9" Feb 26 17:46:05 crc kubenswrapper[4805]: I0226 17:46:05.319836 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535466-kwdlq" Feb 26 17:46:05 crc kubenswrapper[4805]: I0226 17:46:05.364236 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535460-z5f9r"] Feb 26 17:46:05 crc kubenswrapper[4805]: I0226 17:46:05.374757 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535460-z5f9r"] Feb 26 17:46:06 crc kubenswrapper[4805]: I0226 17:46:06.966639 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b33cba2-320e-4c3b-986b-9d7e3225d30e" path="/var/lib/kubelet/pods/4b33cba2-320e-4c3b-986b-9d7e3225d30e/volumes" Feb 26 17:46:17 crc kubenswrapper[4805]: I0226 17:46:17.952886 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:46:17 crc kubenswrapper[4805]: E0226 17:46:17.953844 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:46:31 crc kubenswrapper[4805]: I0226 17:46:31.953249 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:46:31 crc kubenswrapper[4805]: E0226 17:46:31.953995 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:46:42 crc kubenswrapper[4805]: I0226 17:46:42.954188 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:46:42 crc kubenswrapper[4805]: E0226 17:46:42.955346 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:46:43 crc kubenswrapper[4805]: I0226 17:46:43.518048 4805 scope.go:117] "RemoveContainer" containerID="bc41c763572f16446ceb6bcb0b0103b7f75d518389412a0e07eceae1b928ee7c" Feb 26 17:46:53 crc kubenswrapper[4805]: I0226 17:46:53.955132 4805 scope.go:117] "RemoveContainer" 
containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:46:53 crc kubenswrapper[4805]: E0226 17:46:53.956717 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:47:01 crc kubenswrapper[4805]: I0226 17:47:01.048610 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2b40-account-create-update-w85pf"] Feb 26 17:47:01 crc kubenswrapper[4805]: I0226 17:47:01.058549 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-m5z27"] Feb 26 17:47:01 crc kubenswrapper[4805]: I0226 17:47:01.069050 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2b40-account-create-update-w85pf"] Feb 26 17:47:01 crc kubenswrapper[4805]: I0226 17:47:01.083816 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-m5z27"] Feb 26 17:47:02 crc kubenswrapper[4805]: I0226 17:47:02.967952 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1416491f-df93-494f-81a0-21e27134ce2f" path="/var/lib/kubelet/pods/1416491f-df93-494f-81a0-21e27134ce2f/volumes" Feb 26 17:47:02 crc kubenswrapper[4805]: I0226 17:47:02.970391 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aaec7e4-5667-4efd-9506-ab97bd392d78" path="/var/lib/kubelet/pods/3aaec7e4-5667-4efd-9506-ab97bd392d78/volumes" Feb 26 17:47:07 crc kubenswrapper[4805]: I0226 17:47:07.953144 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:47:07 crc kubenswrapper[4805]: E0226 17:47:07.953974 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.042859 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8349-account-create-update-7td5j"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.059814 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a0e5-account-create-update-w7vr5"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.093702 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-52f5k"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.108292 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mnplr"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.118921 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a0e5-account-create-update-w7vr5"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.131542 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mnplr"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.143290 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8349-account-create-update-7td5j"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.154054 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-52f5k"] Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.968220 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a31864-4394-433d-9f97-c52b9d9984e5" 
path="/var/lib/kubelet/pods/33a31864-4394-433d-9f97-c52b9d9984e5/volumes" Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.969678 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4393bdca-06c3-4243-abef-be4d46c5f0b3" path="/var/lib/kubelet/pods/4393bdca-06c3-4243-abef-be4d46c5f0b3/volumes" Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.971626 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5024aeeb-12a7-40df-9ded-fa42366d647e" path="/var/lib/kubelet/pods/5024aeeb-12a7-40df-9ded-fa42366d647e/volumes" Feb 26 17:47:16 crc kubenswrapper[4805]: I0226 17:47:16.972311 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83faf020-4a1d-4f62-846b-0d94b1eeabd1" path="/var/lib/kubelet/pods/83faf020-4a1d-4f62-846b-0d94b1eeabd1/volumes" Feb 26 17:47:22 crc kubenswrapper[4805]: I0226 17:47:22.953741 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:47:22 crc kubenswrapper[4805]: E0226 17:47:22.956786 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:47:35 crc kubenswrapper[4805]: I0226 17:47:35.037427 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vnpwb"] Feb 26 17:47:35 crc kubenswrapper[4805]: I0226 17:47:35.048585 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vnpwb"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.036674 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-rdwzk"] Feb 26 17:47:36 crc 
kubenswrapper[4805]: I0226 17:47:36.050191 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0480-account-create-update-pxmst"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.063490 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2632-account-create-update-p9bx4"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.073503 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9dzgt"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.083053 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-rdwzk"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.092770 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5wk8w"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.102654 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0480-account-create-update-pxmst"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.112666 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9dzgt"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.121872 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2632-account-create-update-p9bx4"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.131508 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5wk8w"] Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.973851 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b4ec10-7e0f-4640-be48-7ed1584ff69f" path="/var/lib/kubelet/pods/10b4ec10-7e0f-4640-be48-7ed1584ff69f/volumes" Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.976322 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db511e1-e71e-41b9-98d7-e2e3b3a74f8c" path="/var/lib/kubelet/pods/5db511e1-e71e-41b9-98d7-e2e3b3a74f8c/volumes" Feb 26 17:47:36 crc 
kubenswrapper[4805]: I0226 17:47:36.990293 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d622ea-77ad-4464-8359-f8e53216ebe8" path="/var/lib/kubelet/pods/92d622ea-77ad-4464-8359-f8e53216ebe8/volumes" Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.994044 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d63385b-f9af-42fb-b2eb-b1e3e8975f44" path="/var/lib/kubelet/pods/9d63385b-f9af-42fb-b2eb-b1e3e8975f44/volumes" Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.995311 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f037b7-d4fa-44cd-a90e-3ba04fd196dc" path="/var/lib/kubelet/pods/a7f037b7-d4fa-44cd-a90e-3ba04fd196dc/volumes" Feb 26 17:47:36 crc kubenswrapper[4805]: I0226 17:47:36.996706 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcb840ba-94cc-4a7a-beea-6505c3f54f5d" path="/var/lib/kubelet/pods/bcb840ba-94cc-4a7a-beea-6505c3f54f5d/volumes" Feb 26 17:47:37 crc kubenswrapper[4805]: I0226 17:47:37.043041 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-6bb0-account-create-update-kjgd9"] Feb 26 17:47:37 crc kubenswrapper[4805]: I0226 17:47:37.063672 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5832-account-create-update-dk78q"] Feb 26 17:47:37 crc kubenswrapper[4805]: I0226 17:47:37.074956 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-6bb0-account-create-update-kjgd9"] Feb 26 17:47:37 crc kubenswrapper[4805]: I0226 17:47:37.084729 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5832-account-create-update-dk78q"] Feb 26 17:47:37 crc kubenswrapper[4805]: I0226 17:47:37.953592 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:47:37 crc kubenswrapper[4805]: E0226 17:47:37.953913 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:47:38 crc kubenswrapper[4805]: I0226 17:47:38.966053 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b81ab7-3249-4960-b05c-7c75f88ed845" path="/var/lib/kubelet/pods/78b81ab7-3249-4960-b05c-7c75f88ed845/volumes" Feb 26 17:47:38 crc kubenswrapper[4805]: I0226 17:47:38.966931 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febba13f-b5af-4cb3-be2a-38f379c5b1aa" path="/var/lib/kubelet/pods/febba13f-b5af-4cb3-be2a-38f379c5b1aa/volumes" Feb 26 17:47:43 crc kubenswrapper[4805]: I0226 17:47:43.610656 4805 scope.go:117] "RemoveContainer" containerID="4e9eae4198f9f533ae9235bb059fbbf9fea594ad5c96c387c19e641fbe0ec573" Feb 26 17:47:43 crc kubenswrapper[4805]: I0226 17:47:43.640496 4805 scope.go:117] "RemoveContainer" containerID="b281079e92ba07b5a4acc4789b4da704a0649a11d8f9d3d6a039e799e5ae8e41" Feb 26 17:47:43 crc kubenswrapper[4805]: I0226 17:47:43.864105 4805 scope.go:117] "RemoveContainer" containerID="2dda3c25afac783c6d780332cbc08c804a3ec2abd063b47691a80e7f8395bb35" Feb 26 17:47:43 crc kubenswrapper[4805]: I0226 17:47:43.928073 4805 scope.go:117] "RemoveContainer" containerID="abef1a1f78868bf97dea85233723ee6a0167853b8ad667efaba274f18715dea5" Feb 26 17:47:43 crc kubenswrapper[4805]: I0226 17:47:43.974337 4805 scope.go:117] "RemoveContainer" containerID="2d3b4f81d72c2cdc692c7d533cb82e0290bde2c965cdd60167423057c63f2172" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.003128 4805 scope.go:117] "RemoveContainer" containerID="a4f52cce3dd3079a2cb59f93565d630d462a129694dca31bada8abe97dfef090" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.055453 
4805 scope.go:117] "RemoveContainer" containerID="59d7445bf75aecfb3b80dae3da2d2fc0dd8f32a73db55fda4c59c5049b3b50ed" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.085543 4805 scope.go:117] "RemoveContainer" containerID="ff547a727c3c968f2699fede59b737bedf7fbd78e91116040f3ad2d9290b7f6a" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.109411 4805 scope.go:117] "RemoveContainer" containerID="63252f9c61caf239d73cd6d4b132187d9c44755e88bab88df84bd73f65ead253" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.143517 4805 scope.go:117] "RemoveContainer" containerID="36c0183f2698f80ffe4d28383cabae18a68abf130af834a15aeb87225b2371c5" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.169237 4805 scope.go:117] "RemoveContainer" containerID="4aa4e1421aad85bae0d2ce9ae9802e3c6bd70d7283d0a45441de19e55988499b" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.195398 4805 scope.go:117] "RemoveContainer" containerID="0a44610bcbb0a3478571154c788d395245df50b17c2df5a8002c8afd5d19109a" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.223971 4805 scope.go:117] "RemoveContainer" containerID="417d3862f6ad7e6ddf3f2faeb530fe737a3fa35cb01317972a3ea0c206e65d07" Feb 26 17:47:44 crc kubenswrapper[4805]: I0226 17:47:44.256522 4805 scope.go:117] "RemoveContainer" containerID="4694545c248ad1de1e60a1831ee1465573606cb1e0f393a7c340ba2ff54365bc" Feb 26 17:47:48 crc kubenswrapper[4805]: I0226 17:47:48.953450 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:47:48 crc kubenswrapper[4805]: E0226 17:47:48.954007 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:47:51 crc kubenswrapper[4805]: I0226 17:47:51.049790 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fqghq"] Feb 26 17:47:51 crc kubenswrapper[4805]: I0226 17:47:51.061440 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fqghq"] Feb 26 17:47:52 crc kubenswrapper[4805]: I0226 17:47:52.566544 4805 generic.go:334] "Generic (PLEG): container finished" podID="ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" containerID="349b2172f64e54ec974e8001e4cf1865ad74f8975676e9692b7e128705d6084d" exitCode=0 Feb 26 17:47:52 crc kubenswrapper[4805]: I0226 17:47:52.566623 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" event={"ID":"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de","Type":"ContainerDied","Data":"349b2172f64e54ec974e8001e4cf1865ad74f8975676e9692b7e128705d6084d"} Feb 26 17:47:52 crc kubenswrapper[4805]: I0226 17:47:52.967534 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89ecd16e-4963-4856-88c2-5920a4d78948" path="/var/lib/kubelet/pods/89ecd16e-4963-4856-88c2-5920a4d78948/volumes" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.231522 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.361913 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-inventory\") pod \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.362471 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-bootstrap-combined-ca-bundle\") pod \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.362503 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-ssh-key-openstack-edpm-ipam\") pod \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.362576 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x6xj\" (UniqueName: \"kubernetes.io/projected/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-kube-api-access-2x6xj\") pod \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\" (UID: \"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de\") " Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.371799 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-kube-api-access-2x6xj" (OuterVolumeSpecName: "kube-api-access-2x6xj") pod "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" (UID: "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de"). InnerVolumeSpecName "kube-api-access-2x6xj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.374680 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" (UID: "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.429214 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-inventory" (OuterVolumeSpecName: "inventory") pod "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" (UID: "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.439268 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" (UID: "ea8e6080-7eee-41dd-a4f6-6753bb1cc0de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.465314 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.465354 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.465367 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x6xj\" (UniqueName: \"kubernetes.io/projected/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-kube-api-access-2x6xj\") on node \"crc\" DevicePath \"\"" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.465378 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea8e6080-7eee-41dd-a4f6-6753bb1cc0de-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.588392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" event={"ID":"ea8e6080-7eee-41dd-a4f6-6753bb1cc0de","Type":"ContainerDied","Data":"6430d7c1a86bfdcf964bf140da220011b79d6d51a77512e217e174bba6ca1855"} Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.588430 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6430d7c1a86bfdcf964bf140da220011b79d6d51a77512e217e174bba6ca1855" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.588447 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.694224 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9"] Feb 26 17:47:54 crc kubenswrapper[4805]: E0226 17:47:54.694981 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472fb3d5-0b1c-4668-93f8-06f6da00569f" containerName="oc" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.695004 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="472fb3d5-0b1c-4668-93f8-06f6da00569f" containerName="oc" Feb 26 17:47:54 crc kubenswrapper[4805]: E0226 17:47:54.695044 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.695054 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.695299 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8e6080-7eee-41dd-a4f6-6753bb1cc0de" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.695338 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="472fb3d5-0b1c-4668-93f8-06f6da00569f" containerName="oc" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.696517 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.699042 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.699043 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.702520 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.702714 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.712798 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9"] Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.771083 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.771239 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 
17:47:54.771447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pr6z\" (UniqueName: \"kubernetes.io/projected/8c450535-29d9-4f24-9d80-e1059a310a12-kube-api-access-8pr6z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.873764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pr6z\" (UniqueName: \"kubernetes.io/projected/8c450535-29d9-4f24-9d80-e1059a310a12-kube-api-access-8pr6z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.873856 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.874279 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.879321 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.880490 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:54 crc kubenswrapper[4805]: I0226 17:47:54.890431 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pr6z\" (UniqueName: \"kubernetes.io/projected/8c450535-29d9-4f24-9d80-e1059a310a12-kube-api-access-8pr6z\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:55 crc kubenswrapper[4805]: I0226 17:47:55.015583 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:47:55 crc kubenswrapper[4805]: I0226 17:47:55.050543 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xbnw6"] Feb 26 17:47:55 crc kubenswrapper[4805]: I0226 17:47:55.067680 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xbnw6"] Feb 26 17:47:55 crc kubenswrapper[4805]: I0226 17:47:55.643399 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9"] Feb 26 17:47:55 crc kubenswrapper[4805]: W0226 17:47:55.645231 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c450535_29d9_4f24_9d80_e1059a310a12.slice/crio-495d2c33c2dca291faab7c5d606e88f49aeea9f9f81d6174f935b5a1bf6e6ff4 WatchSource:0}: Error finding container 495d2c33c2dca291faab7c5d606e88f49aeea9f9f81d6174f935b5a1bf6e6ff4: Status 404 returned error can't find the container with id 495d2c33c2dca291faab7c5d606e88f49aeea9f9f81d6174f935b5a1bf6e6ff4 Feb 26 17:47:56 crc kubenswrapper[4805]: I0226 17:47:56.611957 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" event={"ID":"8c450535-29d9-4f24-9d80-e1059a310a12","Type":"ContainerStarted","Data":"6f92b4cbe46e580839ea3fdfc3d560a61c35f92409af836360c7fad3d7f4518b"} Feb 26 17:47:56 crc kubenswrapper[4805]: I0226 17:47:56.612366 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" event={"ID":"8c450535-29d9-4f24-9d80-e1059a310a12","Type":"ContainerStarted","Data":"495d2c33c2dca291faab7c5d606e88f49aeea9f9f81d6174f935b5a1bf6e6ff4"} Feb 26 17:47:56 crc kubenswrapper[4805]: I0226 17:47:56.968121 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="570af5c6-aef6-4d10-93c9-439dfb9695ee" path="/var/lib/kubelet/pods/570af5c6-aef6-4d10-93c9-439dfb9695ee/volumes" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.129246 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" podStartSLOduration=5.669943118 podStartE2EDuration="6.129224702s" podCreationTimestamp="2026-02-26 17:47:54 +0000 UTC" firstStartedPulling="2026-02-26 17:47:55.647494956 +0000 UTC m=+1990.209249285" lastFinishedPulling="2026-02-26 17:47:56.10677653 +0000 UTC m=+1990.668530869" observedRunningTime="2026-02-26 17:47:56.631918694 +0000 UTC m=+1991.193673033" watchObservedRunningTime="2026-02-26 17:48:00.129224702 +0000 UTC m=+1994.690979041" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.134574 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535468-k58kk"] Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.136428 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.139126 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.139139 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.139366 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.144286 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535468-k58kk"] Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.196305 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsd5l\" (UniqueName: \"kubernetes.io/projected/9663523a-376a-46ac-a20e-a622baf88a96-kube-api-access-xsd5l\") pod \"auto-csr-approver-29535468-k58kk\" (UID: \"9663523a-376a-46ac-a20e-a622baf88a96\") " pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.299356 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsd5l\" (UniqueName: \"kubernetes.io/projected/9663523a-376a-46ac-a20e-a622baf88a96-kube-api-access-xsd5l\") pod \"auto-csr-approver-29535468-k58kk\" (UID: \"9663523a-376a-46ac-a20e-a622baf88a96\") " pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.319859 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsd5l\" (UniqueName: \"kubernetes.io/projected/9663523a-376a-46ac-a20e-a622baf88a96-kube-api-access-xsd5l\") pod \"auto-csr-approver-29535468-k58kk\" (UID: \"9663523a-376a-46ac-a20e-a622baf88a96\") " 
pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.459245 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:00 crc kubenswrapper[4805]: I0226 17:48:00.974355 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535468-k58kk"] Feb 26 17:48:00 crc kubenswrapper[4805]: W0226 17:48:00.979362 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9663523a_376a_46ac_a20e_a622baf88a96.slice/crio-85590aa7276db872bd06502fa41b75527d272d66788d23416fcad6cf2726c2c4 WatchSource:0}: Error finding container 85590aa7276db872bd06502fa41b75527d272d66788d23416fcad6cf2726c2c4: Status 404 returned error can't find the container with id 85590aa7276db872bd06502fa41b75527d272d66788d23416fcad6cf2726c2c4 Feb 26 17:48:01 crc kubenswrapper[4805]: I0226 17:48:01.671335 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535468-k58kk" event={"ID":"9663523a-376a-46ac-a20e-a622baf88a96","Type":"ContainerStarted","Data":"85590aa7276db872bd06502fa41b75527d272d66788d23416fcad6cf2726c2c4"} Feb 26 17:48:01 crc kubenswrapper[4805]: I0226 17:48:01.953471 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:48:01 crc kubenswrapper[4805]: E0226 17:48:01.953864 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:48:02 crc 
kubenswrapper[4805]: I0226 17:48:02.681276 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535468-k58kk" event={"ID":"9663523a-376a-46ac-a20e-a622baf88a96","Type":"ContainerStarted","Data":"b48487eb648d56f3b12f255d3cc64a48919ce8d31be151e02bae28b1a39c8c25"} Feb 26 17:48:02 crc kubenswrapper[4805]: I0226 17:48:02.704665 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535468-k58kk" podStartSLOduration=1.391388853 podStartE2EDuration="2.704642947s" podCreationTimestamp="2026-02-26 17:48:00 +0000 UTC" firstStartedPulling="2026-02-26 17:48:00.982578268 +0000 UTC m=+1995.544332597" lastFinishedPulling="2026-02-26 17:48:02.295832352 +0000 UTC m=+1996.857586691" observedRunningTime="2026-02-26 17:48:02.701325163 +0000 UTC m=+1997.263079512" watchObservedRunningTime="2026-02-26 17:48:02.704642947 +0000 UTC m=+1997.266397276" Feb 26 17:48:03 crc kubenswrapper[4805]: I0226 17:48:03.694833 4805 generic.go:334] "Generic (PLEG): container finished" podID="9663523a-376a-46ac-a20e-a622baf88a96" containerID="b48487eb648d56f3b12f255d3cc64a48919ce8d31be151e02bae28b1a39c8c25" exitCode=0 Feb 26 17:48:03 crc kubenswrapper[4805]: I0226 17:48:03.694880 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535468-k58kk" event={"ID":"9663523a-376a-46ac-a20e-a622baf88a96","Type":"ContainerDied","Data":"b48487eb648d56f3b12f255d3cc64a48919ce8d31be151e02bae28b1a39c8c25"} Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.142266 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.215318 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsd5l\" (UniqueName: \"kubernetes.io/projected/9663523a-376a-46ac-a20e-a622baf88a96-kube-api-access-xsd5l\") pod \"9663523a-376a-46ac-a20e-a622baf88a96\" (UID: \"9663523a-376a-46ac-a20e-a622baf88a96\") " Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.226612 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9663523a-376a-46ac-a20e-a622baf88a96-kube-api-access-xsd5l" (OuterVolumeSpecName: "kube-api-access-xsd5l") pod "9663523a-376a-46ac-a20e-a622baf88a96" (UID: "9663523a-376a-46ac-a20e-a622baf88a96"). InnerVolumeSpecName "kube-api-access-xsd5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.317626 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsd5l\" (UniqueName: \"kubernetes.io/projected/9663523a-376a-46ac-a20e-a622baf88a96-kube-api-access-xsd5l\") on node \"crc\" DevicePath \"\"" Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.725050 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535468-k58kk" event={"ID":"9663523a-376a-46ac-a20e-a622baf88a96","Type":"ContainerDied","Data":"85590aa7276db872bd06502fa41b75527d272d66788d23416fcad6cf2726c2c4"} Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.725100 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85590aa7276db872bd06502fa41b75527d272d66788d23416fcad6cf2726c2c4" Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.725106 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535468-k58kk" Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.796558 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535462-ks82t"] Feb 26 17:48:05 crc kubenswrapper[4805]: I0226 17:48:05.838236 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535462-ks82t"] Feb 26 17:48:06 crc kubenswrapper[4805]: I0226 17:48:06.966817 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a09872-8737-49f7-97b5-6075717d1336" path="/var/lib/kubelet/pods/50a09872-8737-49f7-97b5-6075717d1336/volumes" Feb 26 17:48:07 crc kubenswrapper[4805]: I0226 17:48:07.030400 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-sqks9"] Feb 26 17:48:07 crc kubenswrapper[4805]: I0226 17:48:07.039641 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-sqks9"] Feb 26 17:48:08 crc kubenswrapper[4805]: I0226 17:48:08.968042 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21b0b57-d027-42a1-a3c9-b4030f589db8" path="/var/lib/kubelet/pods/f21b0b57-d027-42a1-a3c9-b4030f589db8/volumes" Feb 26 17:48:15 crc kubenswrapper[4805]: I0226 17:48:15.953432 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:48:15 crc kubenswrapper[4805]: E0226 17:48:15.954188 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:48:27 crc kubenswrapper[4805]: I0226 17:48:27.953329 4805 scope.go:117] 
"RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:48:27 crc kubenswrapper[4805]: E0226 17:48:27.954068 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:48:31 crc kubenswrapper[4805]: I0226 17:48:31.038640 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-5v7qp"] Feb 26 17:48:31 crc kubenswrapper[4805]: I0226 17:48:31.051759 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-5v7qp"] Feb 26 17:48:32 crc kubenswrapper[4805]: I0226 17:48:32.985496 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61d3acd-4133-4fe1-82b0-05036641ed78" path="/var/lib/kubelet/pods/b61d3acd-4133-4fe1-82b0-05036641ed78/volumes" Feb 26 17:48:38 crc kubenswrapper[4805]: I0226 17:48:38.953221 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:48:38 crc kubenswrapper[4805]: E0226 17:48:38.953999 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:48:44 crc kubenswrapper[4805]: I0226 17:48:44.679670 4805 scope.go:117] "RemoveContainer" 
containerID="2c7cb108bd257ac156539abc7b21657be86edcb446efe7138b0ce22e88549bf2" Feb 26 17:48:44 crc kubenswrapper[4805]: I0226 17:48:44.714793 4805 scope.go:117] "RemoveContainer" containerID="2c7a2a2bccf1b8f758badbc3ec907abf4c7dc3d9d49779bf8bf90daec53c3b63" Feb 26 17:48:44 crc kubenswrapper[4805]: I0226 17:48:44.761087 4805 scope.go:117] "RemoveContainer" containerID="9faab5d1fbc11ac72a3512734c8b3416acd207ea773e248b500ca5d9782edac5" Feb 26 17:48:44 crc kubenswrapper[4805]: I0226 17:48:44.835009 4805 scope.go:117] "RemoveContainer" containerID="d9487905cdea19a9180ce1eef845149aed5d0db0b6a772bf776bd7871ed2b09a" Feb 26 17:48:44 crc kubenswrapper[4805]: I0226 17:48:44.884935 4805 scope.go:117] "RemoveContainer" containerID="5c3805d730f539b9aaabdacf6e1567a961008f7e3d540451d5b5e7f9e27ded8d" Feb 26 17:48:52 crc kubenswrapper[4805]: I0226 17:48:52.046116 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jlsft"] Feb 26 17:48:52 crc kubenswrapper[4805]: I0226 17:48:52.056790 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jlsft"] Feb 26 17:48:52 crc kubenswrapper[4805]: I0226 17:48:52.968176 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4" path="/var/lib/kubelet/pods/4a161dc5-9880-4ec1-a1cc-cd6abc30a9d4/volumes" Feb 26 17:48:53 crc kubenswrapper[4805]: I0226 17:48:53.954175 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:48:53 crc kubenswrapper[4805]: E0226 17:48:53.954399 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:48:58 crc kubenswrapper[4805]: I0226 17:48:58.033495 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-79wtt"] Feb 26 17:48:58 crc kubenswrapper[4805]: I0226 17:48:58.044987 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wg9r2"] Feb 26 17:48:58 crc kubenswrapper[4805]: I0226 17:48:58.054648 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wg9r2"] Feb 26 17:48:58 crc kubenswrapper[4805]: I0226 17:48:58.063827 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-79wtt"] Feb 26 17:48:58 crc kubenswrapper[4805]: I0226 17:48:58.964950 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1627cc17-6ee2-4176-b719-aa04e00aa881" path="/var/lib/kubelet/pods/1627cc17-6ee2-4176-b719-aa04e00aa881/volumes" Feb 26 17:48:58 crc kubenswrapper[4805]: I0226 17:48:58.966896 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0f3346-b7f3-48c3-b177-86d8adb7d190" path="/var/lib/kubelet/pods/5c0f3346-b7f3-48c3-b177-86d8adb7d190/volumes" Feb 26 17:49:04 crc kubenswrapper[4805]: I0226 17:49:04.953505 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:49:04 crc kubenswrapper[4805]: E0226 17:49:04.954374 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:49:07 crc kubenswrapper[4805]: I0226 17:49:07.032164 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-sync-dgxjz"] Feb 26 17:49:07 crc kubenswrapper[4805]: I0226 17:49:07.046823 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dgxjz"] Feb 26 17:49:08 crc kubenswrapper[4805]: I0226 17:49:08.967343 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8969f13b-8f7b-4e4e-a891-eac8a978bb42" path="/var/lib/kubelet/pods/8969f13b-8f7b-4e4e-a891-eac8a978bb42/volumes" Feb 26 17:49:16 crc kubenswrapper[4805]: I0226 17:49:16.960820 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:49:16 crc kubenswrapper[4805]: E0226 17:49:16.962925 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:49:23 crc kubenswrapper[4805]: I0226 17:49:23.485615 4805 generic.go:334] "Generic (PLEG): container finished" podID="8c450535-29d9-4f24-9d80-e1059a310a12" containerID="6f92b4cbe46e580839ea3fdfc3d560a61c35f92409af836360c7fad3d7f4518b" exitCode=0 Feb 26 17:49:23 crc kubenswrapper[4805]: I0226 17:49:23.485690 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" event={"ID":"8c450535-29d9-4f24-9d80-e1059a310a12","Type":"ContainerDied","Data":"6f92b4cbe46e580839ea3fdfc3d560a61c35f92409af836360c7fad3d7f4518b"} Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.119708 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.244725 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-ssh-key-openstack-edpm-ipam\") pod \"8c450535-29d9-4f24-9d80-e1059a310a12\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.245203 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pr6z\" (UniqueName: \"kubernetes.io/projected/8c450535-29d9-4f24-9d80-e1059a310a12-kube-api-access-8pr6z\") pod \"8c450535-29d9-4f24-9d80-e1059a310a12\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.245471 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-inventory\") pod \"8c450535-29d9-4f24-9d80-e1059a310a12\" (UID: \"8c450535-29d9-4f24-9d80-e1059a310a12\") " Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.252183 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c450535-29d9-4f24-9d80-e1059a310a12-kube-api-access-8pr6z" (OuterVolumeSpecName: "kube-api-access-8pr6z") pod "8c450535-29d9-4f24-9d80-e1059a310a12" (UID: "8c450535-29d9-4f24-9d80-e1059a310a12"). InnerVolumeSpecName "kube-api-access-8pr6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.274508 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-inventory" (OuterVolumeSpecName: "inventory") pod "8c450535-29d9-4f24-9d80-e1059a310a12" (UID: "8c450535-29d9-4f24-9d80-e1059a310a12"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.284693 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c450535-29d9-4f24-9d80-e1059a310a12" (UID: "8c450535-29d9-4f24-9d80-e1059a310a12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.348348 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.348379 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c450535-29d9-4f24-9d80-e1059a310a12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.348391 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pr6z\" (UniqueName: \"kubernetes.io/projected/8c450535-29d9-4f24-9d80-e1059a310a12-kube-api-access-8pr6z\") on node \"crc\" DevicePath \"\"" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.534375 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.535167 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9" event={"ID":"8c450535-29d9-4f24-9d80-e1059a310a12","Type":"ContainerDied","Data":"495d2c33c2dca291faab7c5d606e88f49aeea9f9f81d6174f935b5a1bf6e6ff4"} Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.535202 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="495d2c33c2dca291faab7c5d606e88f49aeea9f9f81d6174f935b5a1bf6e6ff4" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.599785 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g"] Feb 26 17:49:25 crc kubenswrapper[4805]: E0226 17:49:25.600496 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c450535-29d9-4f24-9d80-e1059a310a12" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.600588 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c450535-29d9-4f24-9d80-e1059a310a12" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 17:49:25 crc kubenswrapper[4805]: E0226 17:49:25.600692 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9663523a-376a-46ac-a20e-a622baf88a96" containerName="oc" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.600758 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9663523a-376a-46ac-a20e-a622baf88a96" containerName="oc" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.600990 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c450535-29d9-4f24-9d80-e1059a310a12" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.601087 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9663523a-376a-46ac-a20e-a622baf88a96" containerName="oc" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.601879 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.603912 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.604448 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.604511 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.604652 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.626214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g"] Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.756372 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.756771 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.756900 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9dgs\" (UniqueName: \"kubernetes.io/projected/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-kube-api-access-c9dgs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.858882 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.859045 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.859083 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9dgs\" (UniqueName: \"kubernetes.io/projected/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-kube-api-access-c9dgs\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.862886 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.864443 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.876287 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9dgs\" (UniqueName: \"kubernetes.io/projected/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-kube-api-access-c9dgs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:25 crc kubenswrapper[4805]: I0226 17:49:25.929381 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:49:26 crc kubenswrapper[4805]: I0226 17:49:26.450213 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g"] Feb 26 17:49:26 crc kubenswrapper[4805]: I0226 17:49:26.543386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" event={"ID":"2cefa4ec-85ad-4c95-a9dc-06978d1325c2","Type":"ContainerStarted","Data":"822bdcf784925bcc144edf749f7edb153084e985212b17c30513a357b68a490d"} Feb 26 17:49:27 crc kubenswrapper[4805]: I0226 17:49:27.556373 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" event={"ID":"2cefa4ec-85ad-4c95-a9dc-06978d1325c2","Type":"ContainerStarted","Data":"d4a6035d7bd1744f11f95a3e0aa7b8c3bc9effe5271b8f66881671051e1dd31a"} Feb 26 17:49:27 crc kubenswrapper[4805]: I0226 17:49:27.602319 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" podStartSLOduration=2.186046838 podStartE2EDuration="2.602294171s" podCreationTimestamp="2026-02-26 17:49:25 +0000 UTC" firstStartedPulling="2026-02-26 17:49:26.468797903 +0000 UTC m=+2081.030552242" lastFinishedPulling="2026-02-26 17:49:26.885045236 +0000 UTC m=+2081.446799575" observedRunningTime="2026-02-26 17:49:27.595914019 +0000 UTC m=+2082.157668358" watchObservedRunningTime="2026-02-26 17:49:27.602294171 +0000 UTC m=+2082.164048530" Feb 26 17:49:31 crc kubenswrapper[4805]: I0226 17:49:31.953429 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:49:31 crc kubenswrapper[4805]: E0226 17:49:31.954288 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:49:37 crc kubenswrapper[4805]: I0226 17:49:37.811255 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xr99r"] Feb 26 17:49:37 crc kubenswrapper[4805]: I0226 17:49:37.815649 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:37 crc kubenswrapper[4805]: I0226 17:49:37.831912 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr99r"] Feb 26 17:49:37 crc kubenswrapper[4805]: I0226 17:49:37.952185 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-catalog-content\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:37 crc kubenswrapper[4805]: I0226 17:49:37.952317 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-utilities\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:37 crc kubenswrapper[4805]: I0226 17:49:37.952376 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7c8\" (UniqueName: \"kubernetes.io/projected/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-kube-api-access-bp7c8\") pod 
\"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.054450 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-utilities\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.054563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7c8\" (UniqueName: \"kubernetes.io/projected/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-kube-api-access-bp7c8\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.054789 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-catalog-content\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.055464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-catalog-content\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.055464 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-utilities\") pod \"redhat-marketplace-xr99r\" (UID: 
\"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.080067 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp7c8\" (UniqueName: \"kubernetes.io/projected/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-kube-api-access-bp7c8\") pod \"redhat-marketplace-xr99r\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.151026 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:38 crc kubenswrapper[4805]: I0226 17:49:38.717642 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr99r"] Feb 26 17:49:38 crc kubenswrapper[4805]: W0226 17:49:38.723050 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf698393_1f18_4014_a5d7_cb2f8a0ac86f.slice/crio-015a6ce4699206d0fc06b56f618fa692002b8bb8127a3b83620c31130fd5a832 WatchSource:0}: Error finding container 015a6ce4699206d0fc06b56f618fa692002b8bb8127a3b83620c31130fd5a832: Status 404 returned error can't find the container with id 015a6ce4699206d0fc06b56f618fa692002b8bb8127a3b83620c31130fd5a832 Feb 26 17:49:39 crc kubenswrapper[4805]: I0226 17:49:39.675790 4805 generic.go:334] "Generic (PLEG): container finished" podID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerID="f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee" exitCode=0 Feb 26 17:49:39 crc kubenswrapper[4805]: I0226 17:49:39.675891 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerDied","Data":"f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee"} Feb 26 17:49:39 crc 
kubenswrapper[4805]: I0226 17:49:39.676170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerStarted","Data":"015a6ce4699206d0fc06b56f618fa692002b8bb8127a3b83620c31130fd5a832"} Feb 26 17:49:39 crc kubenswrapper[4805]: I0226 17:49:39.678869 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.687003 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerStarted","Data":"cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32"} Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.805568 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-264wp"] Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.808771 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.824075 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-264wp"] Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.931678 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-utilities\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.931753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w87z\" (UniqueName: \"kubernetes.io/projected/21f18c98-d7ac-4790-85ff-604d8e560225-kube-api-access-4w87z\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:40 crc kubenswrapper[4805]: I0226 17:49:40.933590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-catalog-content\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.035360 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-catalog-content\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.035509 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-utilities\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.035541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w87z\" (UniqueName: \"kubernetes.io/projected/21f18c98-d7ac-4790-85ff-604d8e560225-kube-api-access-4w87z\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.035998 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-catalog-content\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.036310 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-utilities\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.062232 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w87z\" (UniqueName: \"kubernetes.io/projected/21f18c98-d7ac-4790-85ff-604d8e560225-kube-api-access-4w87z\") pod \"redhat-operators-264wp\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.166986 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:49:41 crc kubenswrapper[4805]: W0226 17:49:41.697322 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21f18c98_d7ac_4790_85ff_604d8e560225.slice/crio-8f88bd820d90087052190d23a26993248eafedf03b55a03618508482b007a605 WatchSource:0}: Error finding container 8f88bd820d90087052190d23a26993248eafedf03b55a03618508482b007a605: Status 404 returned error can't find the container with id 8f88bd820d90087052190d23a26993248eafedf03b55a03618508482b007a605 Feb 26 17:49:41 crc kubenswrapper[4805]: I0226 17:49:41.699405 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-264wp"] Feb 26 17:49:42 crc kubenswrapper[4805]: I0226 17:49:42.715339 4805 generic.go:334] "Generic (PLEG): container finished" podID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerID="cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32" exitCode=0 Feb 26 17:49:42 crc kubenswrapper[4805]: I0226 17:49:42.715423 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerDied","Data":"cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32"} Feb 26 17:49:42 crc kubenswrapper[4805]: I0226 17:49:42.718931 4805 generic.go:334] "Generic (PLEG): container finished" podID="21f18c98-d7ac-4790-85ff-604d8e560225" containerID="a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6" exitCode=0 Feb 26 17:49:42 crc kubenswrapper[4805]: I0226 17:49:42.719006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerDied","Data":"a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6"} Feb 26 17:49:42 crc kubenswrapper[4805]: I0226 17:49:42.719085 
4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerStarted","Data":"8f88bd820d90087052190d23a26993248eafedf03b55a03618508482b007a605"} Feb 26 17:49:43 crc kubenswrapper[4805]: I0226 17:49:43.735500 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerStarted","Data":"223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b"} Feb 26 17:49:43 crc kubenswrapper[4805]: I0226 17:49:43.766723 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xr99r" podStartSLOduration=3.194393167 podStartE2EDuration="6.766688336s" podCreationTimestamp="2026-02-26 17:49:37 +0000 UTC" firstStartedPulling="2026-02-26 17:49:39.67857196 +0000 UTC m=+2094.240326299" lastFinishedPulling="2026-02-26 17:49:43.250867139 +0000 UTC m=+2097.812621468" observedRunningTime="2026-02-26 17:49:43.75379934 +0000 UTC m=+2098.315553699" watchObservedRunningTime="2026-02-26 17:49:43.766688336 +0000 UTC m=+2098.328442675" Feb 26 17:49:44 crc kubenswrapper[4805]: I0226 17:49:44.747452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerStarted","Data":"96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96"} Feb 26 17:49:45 crc kubenswrapper[4805]: I0226 17:49:45.024726 4805 scope.go:117] "RemoveContainer" containerID="fae150fe9c39f36ada3f8390a014777fbd381fda97923ec8607a1697117187e0" Feb 26 17:49:45 crc kubenswrapper[4805]: I0226 17:49:45.073348 4805 scope.go:117] "RemoveContainer" containerID="5e8b2dc525618a74dce4e1d421f9b858f0f55249f5cd42c645e6d4d81f6dd1ea" Feb 26 17:49:45 crc kubenswrapper[4805]: I0226 17:49:45.135447 4805 scope.go:117] "RemoveContainer" 
containerID="56941ff444a4e8d6538e0392b00c752ffa2d15a3bb5f28f4f4b3d1fb49310946" Feb 26 17:49:45 crc kubenswrapper[4805]: I0226 17:49:45.201260 4805 scope.go:117] "RemoveContainer" containerID="f18cd5e30cd9c44aaa5bf52ec9a00c41d4f42071caad7ddb87a3efcc5e7c2330" Feb 26 17:49:45 crc kubenswrapper[4805]: I0226 17:49:45.952908 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:49:46 crc kubenswrapper[4805]: I0226 17:49:46.769549 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"8b8968885710b097709d7bb1c27dbd325f06798e3afdff6ea081963d31b05a35"} Feb 26 17:49:48 crc kubenswrapper[4805]: I0226 17:49:48.153041 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:48 crc kubenswrapper[4805]: I0226 17:49:48.153994 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:49:49 crc kubenswrapper[4805]: I0226 17:49:49.207640 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xr99r" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="registry-server" probeResult="failure" output=< Feb 26 17:49:49 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:49:49 crc kubenswrapper[4805]: > Feb 26 17:49:51 crc kubenswrapper[4805]: I0226 17:49:51.828895 4805 generic.go:334] "Generic (PLEG): container finished" podID="21f18c98-d7ac-4790-85ff-604d8e560225" containerID="96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96" exitCode=0 Feb 26 17:49:51 crc kubenswrapper[4805]: I0226 17:49:51.828967 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" 
event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerDied","Data":"96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96"} Feb 26 17:49:52 crc kubenswrapper[4805]: I0226 17:49:52.843764 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerStarted","Data":"153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5"} Feb 26 17:49:52 crc kubenswrapper[4805]: I0226 17:49:52.868629 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-264wp" podStartSLOduration=3.327367638 podStartE2EDuration="12.868609979s" podCreationTimestamp="2026-02-26 17:49:40 +0000 UTC" firstStartedPulling="2026-02-26 17:49:42.720400799 +0000 UTC m=+2097.282155138" lastFinishedPulling="2026-02-26 17:49:52.26164314 +0000 UTC m=+2106.823397479" observedRunningTime="2026-02-26 17:49:52.862888424 +0000 UTC m=+2107.424642773" watchObservedRunningTime="2026-02-26 17:49:52.868609979 +0000 UTC m=+2107.430364318" Feb 26 17:49:59 crc kubenswrapper[4805]: I0226 17:49:59.197483 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xr99r" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="registry-server" probeResult="failure" output=< Feb 26 17:49:59 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 17:49:59 crc kubenswrapper[4805]: > Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.159485 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535470-c26pd"] Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.161335 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.163598 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.164815 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.164867 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.174981 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535470-c26pd"] Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.286996 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp6zl\" (UniqueName: \"kubernetes.io/projected/676c2a01-20ee-4731-be9b-6b95816e3559-kube-api-access-rp6zl\") pod \"auto-csr-approver-29535470-c26pd\" (UID: \"676c2a01-20ee-4731-be9b-6b95816e3559\") " pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.389673 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp6zl\" (UniqueName: \"kubernetes.io/projected/676c2a01-20ee-4731-be9b-6b95816e3559-kube-api-access-rp6zl\") pod \"auto-csr-approver-29535470-c26pd\" (UID: \"676c2a01-20ee-4731-be9b-6b95816e3559\") " pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.416006 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp6zl\" (UniqueName: \"kubernetes.io/projected/676c2a01-20ee-4731-be9b-6b95816e3559-kube-api-access-rp6zl\") pod \"auto-csr-approver-29535470-c26pd\" (UID: \"676c2a01-20ee-4731-be9b-6b95816e3559\") " 
pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:00 crc kubenswrapper[4805]: I0226 17:50:00.488686 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:01 crc kubenswrapper[4805]: I0226 17:50:01.016057 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535470-c26pd"] Feb 26 17:50:01 crc kubenswrapper[4805]: I0226 17:50:01.168192 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:50:01 crc kubenswrapper[4805]: I0226 17:50:01.168261 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:50:01 crc kubenswrapper[4805]: I0226 17:50:01.219984 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:50:01 crc kubenswrapper[4805]: I0226 17:50:01.925836 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535470-c26pd" event={"ID":"676c2a01-20ee-4731-be9b-6b95816e3559","Type":"ContainerStarted","Data":"36dad0d4adec3366421d2d1d1965adf0b4477d79539b55b3e28d32a50ba57422"} Feb 26 17:50:01 crc kubenswrapper[4805]: I0226 17:50:01.974833 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:50:02 crc kubenswrapper[4805]: I0226 17:50:02.043403 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-264wp"] Feb 26 17:50:02 crc kubenswrapper[4805]: I0226 17:50:02.937837 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535470-c26pd" event={"ID":"676c2a01-20ee-4731-be9b-6b95816e3559","Type":"ContainerStarted","Data":"43d49a77ace660146e2be5f0e3a1a957bc25fad52783e84b10db29ccaff34241"} Feb 26 
17:50:02 crc kubenswrapper[4805]: I0226 17:50:02.959148 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535470-c26pd" podStartSLOduration=1.539456631 podStartE2EDuration="2.959131165s" podCreationTimestamp="2026-02-26 17:50:00 +0000 UTC" firstStartedPulling="2026-02-26 17:50:01.020784891 +0000 UTC m=+2115.582539220" lastFinishedPulling="2026-02-26 17:50:02.440459415 +0000 UTC m=+2117.002213754" observedRunningTime="2026-02-26 17:50:02.951589874 +0000 UTC m=+2117.513344213" watchObservedRunningTime="2026-02-26 17:50:02.959131165 +0000 UTC m=+2117.520885504" Feb 26 17:50:03 crc kubenswrapper[4805]: I0226 17:50:03.949155 4805 generic.go:334] "Generic (PLEG): container finished" podID="676c2a01-20ee-4731-be9b-6b95816e3559" containerID="43d49a77ace660146e2be5f0e3a1a957bc25fad52783e84b10db29ccaff34241" exitCode=0 Feb 26 17:50:03 crc kubenswrapper[4805]: I0226 17:50:03.949244 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535470-c26pd" event={"ID":"676c2a01-20ee-4731-be9b-6b95816e3559","Type":"ContainerDied","Data":"43d49a77ace660146e2be5f0e3a1a957bc25fad52783e84b10db29ccaff34241"} Feb 26 17:50:03 crc kubenswrapper[4805]: I0226 17:50:03.949972 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-264wp" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="registry-server" containerID="cri-o://153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5" gracePeriod=2 Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.533053 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.639609 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-utilities\") pod \"21f18c98-d7ac-4790-85ff-604d8e560225\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.639684 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-catalog-content\") pod \"21f18c98-d7ac-4790-85ff-604d8e560225\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.639902 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w87z\" (UniqueName: \"kubernetes.io/projected/21f18c98-d7ac-4790-85ff-604d8e560225-kube-api-access-4w87z\") pod \"21f18c98-d7ac-4790-85ff-604d8e560225\" (UID: \"21f18c98-d7ac-4790-85ff-604d8e560225\") " Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.641016 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-utilities" (OuterVolumeSpecName: "utilities") pod "21f18c98-d7ac-4790-85ff-604d8e560225" (UID: "21f18c98-d7ac-4790-85ff-604d8e560225"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.642200 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.653983 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f18c98-d7ac-4790-85ff-604d8e560225-kube-api-access-4w87z" (OuterVolumeSpecName: "kube-api-access-4w87z") pod "21f18c98-d7ac-4790-85ff-604d8e560225" (UID: "21f18c98-d7ac-4790-85ff-604d8e560225"). InnerVolumeSpecName "kube-api-access-4w87z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.743314 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w87z\" (UniqueName: \"kubernetes.io/projected/21f18c98-d7ac-4790-85ff-604d8e560225-kube-api-access-4w87z\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.766360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21f18c98-d7ac-4790-85ff-604d8e560225" (UID: "21f18c98-d7ac-4790-85ff-604d8e560225"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.846139 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21f18c98-d7ac-4790-85ff-604d8e560225-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.970032 4805 generic.go:334] "Generic (PLEG): container finished" podID="21f18c98-d7ac-4790-85ff-604d8e560225" containerID="153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5" exitCode=0 Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.970252 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-264wp" Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.971551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerDied","Data":"153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5"} Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.971598 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-264wp" event={"ID":"21f18c98-d7ac-4790-85ff-604d8e560225","Type":"ContainerDied","Data":"8f88bd820d90087052190d23a26993248eafedf03b55a03618508482b007a605"} Feb 26 17:50:04 crc kubenswrapper[4805]: I0226 17:50:04.971652 4805 scope.go:117] "RemoveContainer" containerID="153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.008594 4805 scope.go:117] "RemoveContainer" containerID="96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.023327 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-264wp"] Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 
17:50:05.060539 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-264wp"] Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.094851 4805 scope.go:117] "RemoveContainer" containerID="a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.193549 4805 scope.go:117] "RemoveContainer" containerID="153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5" Feb 26 17:50:05 crc kubenswrapper[4805]: E0226 17:50:05.194636 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5\": container with ID starting with 153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5 not found: ID does not exist" containerID="153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.194677 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5"} err="failed to get container status \"153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5\": rpc error: code = NotFound desc = could not find container \"153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5\": container with ID starting with 153f91b82073113151f1c523388d94fcbcccaa945aeb9003de0071cb2aa643b5 not found: ID does not exist" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.194707 4805 scope.go:117] "RemoveContainer" containerID="96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96" Feb 26 17:50:05 crc kubenswrapper[4805]: E0226 17:50:05.199538 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96\": container with ID 
starting with 96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96 not found: ID does not exist" containerID="96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.199589 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96"} err="failed to get container status \"96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96\": rpc error: code = NotFound desc = could not find container \"96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96\": container with ID starting with 96c41477ef8603d93c1d7e3b44cebe6dc04adea13822a7e16e4eb932e3d38d96 not found: ID does not exist" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.199619 4805 scope.go:117] "RemoveContainer" containerID="a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6" Feb 26 17:50:05 crc kubenswrapper[4805]: E0226 17:50:05.203396 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6\": container with ID starting with a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6 not found: ID does not exist" containerID="a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.203444 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6"} err="failed to get container status \"a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6\": rpc error: code = NotFound desc = could not find container \"a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6\": container with ID starting with a5f4de3e6c757adf337de850bde5b7c8ba0b962f117a1c043cdf03b6bdf6dca6 not found: 
ID does not exist" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.680726 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.866965 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp6zl\" (UniqueName: \"kubernetes.io/projected/676c2a01-20ee-4731-be9b-6b95816e3559-kube-api-access-rp6zl\") pod \"676c2a01-20ee-4731-be9b-6b95816e3559\" (UID: \"676c2a01-20ee-4731-be9b-6b95816e3559\") " Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.874326 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676c2a01-20ee-4731-be9b-6b95816e3559-kube-api-access-rp6zl" (OuterVolumeSpecName: "kube-api-access-rp6zl") pod "676c2a01-20ee-4731-be9b-6b95816e3559" (UID: "676c2a01-20ee-4731-be9b-6b95816e3559"). InnerVolumeSpecName "kube-api-access-rp6zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.970607 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp6zl\" (UniqueName: \"kubernetes.io/projected/676c2a01-20ee-4731-be9b-6b95816e3559-kube-api-access-rp6zl\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.987493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535470-c26pd" event={"ID":"676c2a01-20ee-4731-be9b-6b95816e3559","Type":"ContainerDied","Data":"36dad0d4adec3366421d2d1d1965adf0b4477d79539b55b3e28d32a50ba57422"} Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.987826 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36dad0d4adec3366421d2d1d1965adf0b4477d79539b55b3e28d32a50ba57422" Feb 26 17:50:05 crc kubenswrapper[4805]: I0226 17:50:05.987922 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535470-c26pd" Feb 26 17:50:06 crc kubenswrapper[4805]: I0226 17:50:06.034793 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535464-7bbhk"] Feb 26 17:50:06 crc kubenswrapper[4805]: I0226 17:50:06.043934 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535464-7bbhk"] Feb 26 17:50:06 crc kubenswrapper[4805]: I0226 17:50:06.971626 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" path="/var/lib/kubelet/pods/21f18c98-d7ac-4790-85ff-604d8e560225/volumes" Feb 26 17:50:06 crc kubenswrapper[4805]: I0226 17:50:06.972700 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa023ecd-19d5-44fa-89da-193222978970" path="/var/lib/kubelet/pods/aa023ecd-19d5-44fa-89da-193222978970/volumes" Feb 26 17:50:08 crc kubenswrapper[4805]: I0226 17:50:08.219957 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:50:08 crc kubenswrapper[4805]: I0226 17:50:08.272243 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:50:09 crc kubenswrapper[4805]: I0226 17:50:09.010368 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr99r"] Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.026833 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xr99r" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="registry-server" containerID="cri-o://223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b" gracePeriod=2 Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.796595 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.915139 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-catalog-content\") pod \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.915320 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp7c8\" (UniqueName: \"kubernetes.io/projected/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-kube-api-access-bp7c8\") pod \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.915440 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-utilities\") pod \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\" (UID: \"bf698393-1f18-4014-a5d7-cb2f8a0ac86f\") " Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.915916 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-utilities" (OuterVolumeSpecName: "utilities") pod "bf698393-1f18-4014-a5d7-cb2f8a0ac86f" (UID: "bf698393-1f18-4014-a5d7-cb2f8a0ac86f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.922102 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-kube-api-access-bp7c8" (OuterVolumeSpecName: "kube-api-access-bp7c8") pod "bf698393-1f18-4014-a5d7-cb2f8a0ac86f" (UID: "bf698393-1f18-4014-a5d7-cb2f8a0ac86f"). InnerVolumeSpecName "kube-api-access-bp7c8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:50:10 crc kubenswrapper[4805]: I0226 17:50:10.938896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf698393-1f18-4014-a5d7-cb2f8a0ac86f" (UID: "bf698393-1f18-4014-a5d7-cb2f8a0ac86f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.017658 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.017972 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp7c8\" (UniqueName: \"kubernetes.io/projected/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-kube-api-access-bp7c8\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.017988 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf698393-1f18-4014-a5d7-cb2f8a0ac86f-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.036821 4805 generic.go:334] "Generic (PLEG): container finished" podID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerID="223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b" exitCode=0 Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.036881 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerDied","Data":"223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b"} Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.036919 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-xr99r" event={"ID":"bf698393-1f18-4014-a5d7-cb2f8a0ac86f","Type":"ContainerDied","Data":"015a6ce4699206d0fc06b56f618fa692002b8bb8127a3b83620c31130fd5a832"} Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.036941 4805 scope.go:117] "RemoveContainer" containerID="223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.036889 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xr99r" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.061448 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr99r"] Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.065914 4805 scope.go:117] "RemoveContainer" containerID="cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.072827 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xr99r"] Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.088193 4805 scope.go:117] "RemoveContainer" containerID="f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.150979 4805 scope.go:117] "RemoveContainer" containerID="223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b" Feb 26 17:50:11 crc kubenswrapper[4805]: E0226 17:50:11.151503 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b\": container with ID starting with 223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b not found: ID does not exist" containerID="223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.151547 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b"} err="failed to get container status \"223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b\": rpc error: code = NotFound desc = could not find container \"223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b\": container with ID starting with 223206f1872aaae677289fbaf75c44c347765783afcbdaedcfe46afe2f8bb44b not found: ID does not exist" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.151577 4805 scope.go:117] "RemoveContainer" containerID="cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32" Feb 26 17:50:11 crc kubenswrapper[4805]: E0226 17:50:11.151945 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32\": container with ID starting with cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32 not found: ID does not exist" containerID="cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.151988 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32"} err="failed to get container status \"cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32\": rpc error: code = NotFound desc = could not find container \"cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32\": container with ID starting with cd1196a421d67894e51633084d7fea0ee948051438446743edf3a077fbbc9b32 not found: ID does not exist" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.152052 4805 scope.go:117] "RemoveContainer" containerID="f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee" Feb 26 17:50:11 crc kubenswrapper[4805]: E0226 
17:50:11.152486 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee\": container with ID starting with f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee not found: ID does not exist" containerID="f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee" Feb 26 17:50:11 crc kubenswrapper[4805]: I0226 17:50:11.152577 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee"} err="failed to get container status \"f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee\": rpc error: code = NotFound desc = could not find container \"f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee\": container with ID starting with f5c971c6266096f3b1ec3a70bdf6dc4f8ec8bbf3d1c5e59c20768608f4e125ee not found: ID does not exist" Feb 26 17:50:12 crc kubenswrapper[4805]: I0226 17:50:12.970578 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" path="/var/lib/kubelet/pods/bf698393-1f18-4014-a5d7-cb2f8a0ac86f/volumes" Feb 26 17:50:15 crc kubenswrapper[4805]: I0226 17:50:15.083791 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c5c5-account-create-update-675kp"] Feb 26 17:50:15 crc kubenswrapper[4805]: I0226 17:50:15.092323 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1518-account-create-update-ktdrg"] Feb 26 17:50:15 crc kubenswrapper[4805]: I0226 17:50:15.102166 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c5c5-account-create-update-675kp"] Feb 26 17:50:15 crc kubenswrapper[4805]: I0226 17:50:15.112478 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a085-account-create-update-5t9kt"] Feb 26 17:50:15 crc 
kubenswrapper[4805]: I0226 17:50:15.125503 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1518-account-create-update-ktdrg"] Feb 26 17:50:15 crc kubenswrapper[4805]: I0226 17:50:15.135258 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a085-account-create-update-5t9kt"] Feb 26 17:50:16 crc kubenswrapper[4805]: I0226 17:50:16.035212 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-76npn"] Feb 26 17:50:16 crc kubenswrapper[4805]: I0226 17:50:16.049790 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-76npn"] Feb 26 17:50:16 crc kubenswrapper[4805]: I0226 17:50:16.971119 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435245a7-a717-4cb3-8125-9800dd40f909" path="/var/lib/kubelet/pods/435245a7-a717-4cb3-8125-9800dd40f909/volumes" Feb 26 17:50:16 crc kubenswrapper[4805]: I0226 17:50:16.972748 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873a3075-565b-48e7-a3d7-e0bcc6a0a60b" path="/var/lib/kubelet/pods/873a3075-565b-48e7-a3d7-e0bcc6a0a60b/volumes" Feb 26 17:50:16 crc kubenswrapper[4805]: I0226 17:50:16.973804 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad35cf63-aace-4b6f-b063-1f3642da07da" path="/var/lib/kubelet/pods/ad35cf63-aace-4b6f-b063-1f3642da07da/volumes" Feb 26 17:50:16 crc kubenswrapper[4805]: I0226 17:50:16.974755 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d80f8620-b048-40aa-a97c-07cdd033379f" path="/var/lib/kubelet/pods/d80f8620-b048-40aa-a97c-07cdd033379f/volumes" Feb 26 17:50:17 crc kubenswrapper[4805]: I0226 17:50:17.034952 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-snh6t"] Feb 26 17:50:17 crc kubenswrapper[4805]: I0226 17:50:17.049731 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-jktxs"] Feb 26 17:50:17 
crc kubenswrapper[4805]: I0226 17:50:17.060880 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-snh6t"] Feb 26 17:50:17 crc kubenswrapper[4805]: I0226 17:50:17.072286 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jktxs"] Feb 26 17:50:18 crc kubenswrapper[4805]: I0226 17:50:18.983101 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8365e8e6-fa3b-4a87-936c-d63936b861d0" path="/var/lib/kubelet/pods/8365e8e6-fa3b-4a87-936c-d63936b861d0/volumes" Feb 26 17:50:18 crc kubenswrapper[4805]: I0226 17:50:18.984099 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec8ed7b-d0af-4d30-b8d0-764518020645" path="/var/lib/kubelet/pods/cec8ed7b-d0af-4d30-b8d0-764518020645/volumes" Feb 26 17:50:24 crc kubenswrapper[4805]: I0226 17:50:24.191663 4805 generic.go:334] "Generic (PLEG): container finished" podID="2cefa4ec-85ad-4c95-a9dc-06978d1325c2" containerID="d4a6035d7bd1744f11f95a3e0aa7b8c3bc9effe5271b8f66881671051e1dd31a" exitCode=0 Feb 26 17:50:24 crc kubenswrapper[4805]: I0226 17:50:24.192035 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" event={"ID":"2cefa4ec-85ad-4c95-a9dc-06978d1325c2","Type":"ContainerDied","Data":"d4a6035d7bd1744f11f95a3e0aa7b8c3bc9effe5271b8f66881671051e1dd31a"} Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.735132 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.845552 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9dgs\" (UniqueName: \"kubernetes.io/projected/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-kube-api-access-c9dgs\") pod \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.845814 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-ssh-key-openstack-edpm-ipam\") pod \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.845999 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-inventory\") pod \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\" (UID: \"2cefa4ec-85ad-4c95-a9dc-06978d1325c2\") " Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.852140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-kube-api-access-c9dgs" (OuterVolumeSpecName: "kube-api-access-c9dgs") pod "2cefa4ec-85ad-4c95-a9dc-06978d1325c2" (UID: "2cefa4ec-85ad-4c95-a9dc-06978d1325c2"). InnerVolumeSpecName "kube-api-access-c9dgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.876650 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cefa4ec-85ad-4c95-a9dc-06978d1325c2" (UID: "2cefa4ec-85ad-4c95-a9dc-06978d1325c2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.883259 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-inventory" (OuterVolumeSpecName: "inventory") pod "2cefa4ec-85ad-4c95-a9dc-06978d1325c2" (UID: "2cefa4ec-85ad-4c95-a9dc-06978d1325c2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.948676 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.948907 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:25 crc kubenswrapper[4805]: I0226 17:50:25.948988 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9dgs\" (UniqueName: \"kubernetes.io/projected/2cefa4ec-85ad-4c95-a9dc-06978d1325c2-kube-api-access-c9dgs\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.214460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" 
event={"ID":"2cefa4ec-85ad-4c95-a9dc-06978d1325c2","Type":"ContainerDied","Data":"822bdcf784925bcc144edf749f7edb153084e985212b17c30513a357b68a490d"} Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.214746 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822bdcf784925bcc144edf749f7edb153084e985212b17c30513a357b68a490d" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.214551 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.424916 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs"] Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.427776 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="extract-utilities" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.427831 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="extract-utilities" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.427863 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676c2a01-20ee-4731-be9b-6b95816e3559" containerName="oc" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.427894 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="676c2a01-20ee-4731-be9b-6b95816e3559" containerName="oc" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.427911 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="registry-server" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.427922 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="registry-server" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 
17:50:26.427934 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="extract-content" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428120 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="extract-content" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.428134 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="registry-server" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428140 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="registry-server" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.428187 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="extract-content" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428195 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="extract-content" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.428207 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="extract-utilities" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428213 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="extract-utilities" Feb 26 17:50:26 crc kubenswrapper[4805]: E0226 17:50:26.428225 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cefa4ec-85ad-4c95-a9dc-06978d1325c2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428232 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cefa4ec-85ad-4c95-a9dc-06978d1325c2" 
containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428478 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="676c2a01-20ee-4731-be9b-6b95816e3559" containerName="oc" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428527 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f18c98-d7ac-4790-85ff-604d8e560225" containerName="registry-server" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428541 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf698393-1f18-4014-a5d7-cb2f8a0ac86f" containerName="registry-server" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.428553 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cefa4ec-85ad-4c95-a9dc-06978d1325c2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.429333 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.431345 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.433101 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.434181 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.438397 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.445602 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs"] Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.561732 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9722d\" (UniqueName: \"kubernetes.io/projected/40f2cf4c-0815-415c-930f-90aebbfa5d64-kube-api-access-9722d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.561784 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 
17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.561992 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.664455 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.664526 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9722d\" (UniqueName: \"kubernetes.io/projected/40f2cf4c-0815-415c-930f-90aebbfa5d64-kube-api-access-9722d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.664557 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.668975 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.669199 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.691314 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9722d\" (UniqueName: \"kubernetes.io/projected/40f2cf4c-0815-415c-930f-90aebbfa5d64-kube-api-access-9722d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-42gxs\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:26 crc kubenswrapper[4805]: I0226 17:50:26.749657 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:27 crc kubenswrapper[4805]: I0226 17:50:27.297406 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs"] Feb 26 17:50:27 crc kubenswrapper[4805]: W0226 17:50:27.297944 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40f2cf4c_0815_415c_930f_90aebbfa5d64.slice/crio-58858d354d7eb2a9e2105dfa8781f3f097ec54fa48e7b9234569c373a3d7bb8c WatchSource:0}: Error finding container 58858d354d7eb2a9e2105dfa8781f3f097ec54fa48e7b9234569c373a3d7bb8c: Status 404 returned error can't find the container with id 58858d354d7eb2a9e2105dfa8781f3f097ec54fa48e7b9234569c373a3d7bb8c Feb 26 17:50:28 crc kubenswrapper[4805]: I0226 17:50:28.233473 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" event={"ID":"40f2cf4c-0815-415c-930f-90aebbfa5d64","Type":"ContainerStarted","Data":"eefe187083a04b35181ae32291e4fdac4e9683f8560147d3acf3d7150f329707"} Feb 26 17:50:28 crc kubenswrapper[4805]: I0226 17:50:28.234066 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" event={"ID":"40f2cf4c-0815-415c-930f-90aebbfa5d64","Type":"ContainerStarted","Data":"58858d354d7eb2a9e2105dfa8781f3f097ec54fa48e7b9234569c373a3d7bb8c"} Feb 26 17:50:28 crc kubenswrapper[4805]: I0226 17:50:28.259314 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" podStartSLOduration=1.713769799 podStartE2EDuration="2.259288639s" podCreationTimestamp="2026-02-26 17:50:26 +0000 UTC" firstStartedPulling="2026-02-26 17:50:27.300193243 +0000 UTC m=+2141.861947582" lastFinishedPulling="2026-02-26 17:50:27.845712083 +0000 UTC 
m=+2142.407466422" observedRunningTime="2026-02-26 17:50:28.253467821 +0000 UTC m=+2142.815222180" watchObservedRunningTime="2026-02-26 17:50:28.259288639 +0000 UTC m=+2142.821042998" Feb 26 17:50:32 crc kubenswrapper[4805]: I0226 17:50:32.274753 4805 generic.go:334] "Generic (PLEG): container finished" podID="40f2cf4c-0815-415c-930f-90aebbfa5d64" containerID="eefe187083a04b35181ae32291e4fdac4e9683f8560147d3acf3d7150f329707" exitCode=0 Feb 26 17:50:32 crc kubenswrapper[4805]: I0226 17:50:32.274864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" event={"ID":"40f2cf4c-0815-415c-930f-90aebbfa5d64","Type":"ContainerDied","Data":"eefe187083a04b35181ae32291e4fdac4e9683f8560147d3acf3d7150f329707"} Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.784337 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.918419 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9722d\" (UniqueName: \"kubernetes.io/projected/40f2cf4c-0815-415c-930f-90aebbfa5d64-kube-api-access-9722d\") pod \"40f2cf4c-0815-415c-930f-90aebbfa5d64\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.918635 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-ssh-key-openstack-edpm-ipam\") pod \"40f2cf4c-0815-415c-930f-90aebbfa5d64\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.918792 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-inventory\") pod 
\"40f2cf4c-0815-415c-930f-90aebbfa5d64\" (UID: \"40f2cf4c-0815-415c-930f-90aebbfa5d64\") " Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.926472 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f2cf4c-0815-415c-930f-90aebbfa5d64-kube-api-access-9722d" (OuterVolumeSpecName: "kube-api-access-9722d") pod "40f2cf4c-0815-415c-930f-90aebbfa5d64" (UID: "40f2cf4c-0815-415c-930f-90aebbfa5d64"). InnerVolumeSpecName "kube-api-access-9722d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.949911 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-inventory" (OuterVolumeSpecName: "inventory") pod "40f2cf4c-0815-415c-930f-90aebbfa5d64" (UID: "40f2cf4c-0815-415c-930f-90aebbfa5d64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:50:33 crc kubenswrapper[4805]: I0226 17:50:33.952896 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "40f2cf4c-0815-415c-930f-90aebbfa5d64" (UID: "40f2cf4c-0815-415c-930f-90aebbfa5d64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.022455 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.022503 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9722d\" (UniqueName: \"kubernetes.io/projected/40f2cf4c-0815-415c-930f-90aebbfa5d64-kube-api-access-9722d\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.022519 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40f2cf4c-0815-415c-930f-90aebbfa5d64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.293695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" event={"ID":"40f2cf4c-0815-415c-930f-90aebbfa5d64","Type":"ContainerDied","Data":"58858d354d7eb2a9e2105dfa8781f3f097ec54fa48e7b9234569c373a3d7bb8c"} Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.293922 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58858d354d7eb2a9e2105dfa8781f3f097ec54fa48e7b9234569c373a3d7bb8c" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.293805 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-42gxs" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.404194 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv"] Feb 26 17:50:34 crc kubenswrapper[4805]: E0226 17:50:34.404739 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f2cf4c-0815-415c-930f-90aebbfa5d64" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.404760 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f2cf4c-0815-415c-930f-90aebbfa5d64" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.404985 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f2cf4c-0815-415c-930f-90aebbfa5d64" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.405865 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.411907 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.411925 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.412773 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.412879 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.437551 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv"] Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.540779 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzbm\" (UniqueName: \"kubernetes.io/projected/705e88a7-b9ad-435e-9e9e-802e433d3bb0-kube-api-access-fkzbm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.540857 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.540957 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.643572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzbm\" (UniqueName: \"kubernetes.io/projected/705e88a7-b9ad-435e-9e9e-802e433d3bb0-kube-api-access-fkzbm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.643663 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.643731 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.647692 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.647804 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.659720 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzbm\" (UniqueName: \"kubernetes.io/projected/705e88a7-b9ad-435e-9e9e-802e433d3bb0-kube-api-access-fkzbm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-pfzsv\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:34 crc kubenswrapper[4805]: I0226 17:50:34.734486 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:50:35 crc kubenswrapper[4805]: I0226 17:50:35.287551 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv"] Feb 26 17:50:35 crc kubenswrapper[4805]: I0226 17:50:35.305592 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" event={"ID":"705e88a7-b9ad-435e-9e9e-802e433d3bb0","Type":"ContainerStarted","Data":"1e96e08af15493193f99132cce272c2ce88aea9fd85dab8df85ed9ebed76187d"} Feb 26 17:50:36 crc kubenswrapper[4805]: I0226 17:50:36.319896 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" event={"ID":"705e88a7-b9ad-435e-9e9e-802e433d3bb0","Type":"ContainerStarted","Data":"3ffbe4254ba263393d55ab49afac0faa7ade575b5a766ed9af0d7d4e2b723eb7"} Feb 26 17:50:36 crc kubenswrapper[4805]: I0226 17:50:36.354852 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" podStartSLOduration=1.959452281 podStartE2EDuration="2.354815445s" podCreationTimestamp="2026-02-26 17:50:34 +0000 UTC" firstStartedPulling="2026-02-26 17:50:35.292567414 +0000 UTC m=+2149.854321753" lastFinishedPulling="2026-02-26 17:50:35.687930578 +0000 UTC m=+2150.249684917" observedRunningTime="2026-02-26 17:50:36.338138872 +0000 UTC m=+2150.899893211" watchObservedRunningTime="2026-02-26 17:50:36.354815445 +0000 UTC m=+2150.916569784" Feb 26 17:50:45 crc kubenswrapper[4805]: I0226 17:50:45.340324 4805 scope.go:117] "RemoveContainer" containerID="e5b35dab67f4fa36723c7c2b347b7be0dd192c369bbba9e62e5e5adc1902b738" Feb 26 17:50:45 crc kubenswrapper[4805]: I0226 17:50:45.366705 4805 scope.go:117] "RemoveContainer" containerID="53f74aaeea74527f6b76cab70cebba1ec8737bf0505435a9496bfc8e895269ed" Feb 26 17:50:45 crc 
kubenswrapper[4805]: I0226 17:50:45.407390 4805 scope.go:117] "RemoveContainer" containerID="bfb8e199a6ecabf2e9849b0fdf0e6af6ad70d102d761356009ddd49368d17355" Feb 26 17:50:45 crc kubenswrapper[4805]: I0226 17:50:45.458260 4805 scope.go:117] "RemoveContainer" containerID="56d5eddd25a3f123749b0212de503c05892bf64ea83aab75b0a641264ab47e1d" Feb 26 17:50:45 crc kubenswrapper[4805]: I0226 17:50:45.524396 4805 scope.go:117] "RemoveContainer" containerID="acedf37dfa3dd13d23f4707dee188e7e7b37c50574245473acb8496ad1dafcc4" Feb 26 17:50:45 crc kubenswrapper[4805]: I0226 17:50:45.566398 4805 scope.go:117] "RemoveContainer" containerID="c6156fc5edcfeb21bc35b251b08eaebf27d36d89c0dc749e64b96adbd1f0bd11" Feb 26 17:50:45 crc kubenswrapper[4805]: I0226 17:50:45.627044 4805 scope.go:117] "RemoveContainer" containerID="7d516b9507afe0f04a90352e764e81cf937be244bb4d282e8177b35814eb1be7" Feb 26 17:51:06 crc kubenswrapper[4805]: I0226 17:51:06.627965 4805 generic.go:334] "Generic (PLEG): container finished" podID="705e88a7-b9ad-435e-9e9e-802e433d3bb0" containerID="3ffbe4254ba263393d55ab49afac0faa7ade575b5a766ed9af0d7d4e2b723eb7" exitCode=0 Feb 26 17:51:06 crc kubenswrapper[4805]: I0226 17:51:06.628129 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" event={"ID":"705e88a7-b9ad-435e-9e9e-802e433d3bb0","Type":"ContainerDied","Data":"3ffbe4254ba263393d55ab49afac0faa7ade575b5a766ed9af0d7d4e2b723eb7"} Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.121073 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.289406 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkzbm\" (UniqueName: \"kubernetes.io/projected/705e88a7-b9ad-435e-9e9e-802e433d3bb0-kube-api-access-fkzbm\") pod \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.289463 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-inventory\") pod \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.289532 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-ssh-key-openstack-edpm-ipam\") pod \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\" (UID: \"705e88a7-b9ad-435e-9e9e-802e433d3bb0\") " Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.304319 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705e88a7-b9ad-435e-9e9e-802e433d3bb0-kube-api-access-fkzbm" (OuterVolumeSpecName: "kube-api-access-fkzbm") pod "705e88a7-b9ad-435e-9e9e-802e433d3bb0" (UID: "705e88a7-b9ad-435e-9e9e-802e433d3bb0"). InnerVolumeSpecName "kube-api-access-fkzbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.322824 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "705e88a7-b9ad-435e-9e9e-802e433d3bb0" (UID: "705e88a7-b9ad-435e-9e9e-802e433d3bb0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.333882 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-inventory" (OuterVolumeSpecName: "inventory") pod "705e88a7-b9ad-435e-9e9e-802e433d3bb0" (UID: "705e88a7-b9ad-435e-9e9e-802e433d3bb0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.391988 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkzbm\" (UniqueName: \"kubernetes.io/projected/705e88a7-b9ad-435e-9e9e-802e433d3bb0-kube-api-access-fkzbm\") on node \"crc\" DevicePath \"\"" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.392095 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.392108 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/705e88a7-b9ad-435e-9e9e-802e433d3bb0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.653982 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" 
event={"ID":"705e88a7-b9ad-435e-9e9e-802e433d3bb0","Type":"ContainerDied","Data":"1e96e08af15493193f99132cce272c2ce88aea9fd85dab8df85ed9ebed76187d"} Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.654040 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e96e08af15493193f99132cce272c2ce88aea9fd85dab8df85ed9ebed76187d" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.654064 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-pfzsv" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.743920 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f"] Feb 26 17:51:08 crc kubenswrapper[4805]: E0226 17:51:08.744514 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705e88a7-b9ad-435e-9e9e-802e433d3bb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.744543 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="705e88a7-b9ad-435e-9e9e-802e433d3bb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.744842 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="705e88a7-b9ad-435e-9e9e-802e433d3bb0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.745800 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.747874 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.748294 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.748546 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.749042 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.756436 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f"] Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.903998 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.904196 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkr7n\" (UniqueName: \"kubernetes.io/projected/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-kube-api-access-pkr7n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:08 crc kubenswrapper[4805]: I0226 17:51:08.904239 4805 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.005971 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkr7n\" (UniqueName: \"kubernetes.io/projected/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-kube-api-access-pkr7n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.006063 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.006157 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.009960 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.011358 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.023781 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkr7n\" (UniqueName: \"kubernetes.io/projected/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-kube-api-access-pkr7n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.069183 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.625960 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f"] Feb 26 17:51:09 crc kubenswrapper[4805]: I0226 17:51:09.667643 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" event={"ID":"4fd5faad-e8f5-48cd-93a1-5818fc463c2e","Type":"ContainerStarted","Data":"9e8d9ccad241a0979cea7d1b222350a70593ac04799988929c83730eb7a3e823"} Feb 26 17:51:10 crc kubenswrapper[4805]: I0226 17:51:10.676217 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" event={"ID":"4fd5faad-e8f5-48cd-93a1-5818fc463c2e","Type":"ContainerStarted","Data":"6d474234306d4eafff9d7c677b7540e543cc60d1d26cdc3c4e3195f6555fe236"} Feb 26 17:51:10 crc kubenswrapper[4805]: I0226 17:51:10.703054 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" podStartSLOduration=2.295843055 podStartE2EDuration="2.703035638s" podCreationTimestamp="2026-02-26 17:51:08 +0000 UTC" firstStartedPulling="2026-02-26 17:51:09.630121627 +0000 UTC m=+2184.191875966" lastFinishedPulling="2026-02-26 17:51:10.03731421 +0000 UTC m=+2184.599068549" observedRunningTime="2026-02-26 17:51:10.693009694 +0000 UTC m=+2185.254764053" watchObservedRunningTime="2026-02-26 17:51:10.703035638 +0000 UTC m=+2185.264789997" Feb 26 17:51:13 crc kubenswrapper[4805]: I0226 17:51:13.063944 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-82ztd"] Feb 26 17:51:13 crc kubenswrapper[4805]: I0226 17:51:13.083170 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-82ztd"] Feb 26 17:51:14 crc kubenswrapper[4805]: 
I0226 17:51:14.969535 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328f9e45-178a-4f2d-b4f5-cf870b94e1a2" path="/var/lib/kubelet/pods/328f9e45-178a-4f2d-b4f5-cf870b94e1a2/volumes" Feb 26 17:51:37 crc kubenswrapper[4805]: I0226 17:51:37.055403 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cs754"] Feb 26 17:51:37 crc kubenswrapper[4805]: I0226 17:51:37.067603 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cs754"] Feb 26 17:51:38 crc kubenswrapper[4805]: I0226 17:51:38.968958 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f590f397-4abc-4f7a-a9f9-013f581e3ec6" path="/var/lib/kubelet/pods/f590f397-4abc-4f7a-a9f9-013f581e3ec6/volumes" Feb 26 17:51:41 crc kubenswrapper[4805]: I0226 17:51:41.032181 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfpdp"] Feb 26 17:51:41 crc kubenswrapper[4805]: I0226 17:51:41.041652 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bfpdp"] Feb 26 17:51:43 crc kubenswrapper[4805]: I0226 17:51:43.271727 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59254db9-616c-48c0-bad7-c55d30e99749" path="/var/lib/kubelet/pods/59254db9-616c-48c0-bad7-c55d30e99749/volumes" Feb 26 17:51:45 crc kubenswrapper[4805]: I0226 17:51:45.848911 4805 scope.go:117] "RemoveContainer" containerID="a3416ad08c841f32896caf390933d5af6f20f84578821770aff259b157fb0d60" Feb 26 17:51:45 crc kubenswrapper[4805]: I0226 17:51:45.904002 4805 scope.go:117] "RemoveContainer" containerID="ea36b2b3a113a3b525158efbbaa019cdbbe15f4b9c7b369d58bde6e7a288fd7f" Feb 26 17:51:46 crc kubenswrapper[4805]: I0226 17:51:46.000716 4805 scope.go:117] "RemoveContainer" containerID="0e34f29d8dadc2970f60d8318c3a9e879c93c91e421544a2764c267db66e03bd" Feb 26 17:51:51 crc kubenswrapper[4805]: I0226 17:51:51.335546 4805 generic.go:334] 
"Generic (PLEG): container finished" podID="4fd5faad-e8f5-48cd-93a1-5818fc463c2e" containerID="6d474234306d4eafff9d7c677b7540e543cc60d1d26cdc3c4e3195f6555fe236" exitCode=0 Feb 26 17:51:51 crc kubenswrapper[4805]: I0226 17:51:51.335815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" event={"ID":"4fd5faad-e8f5-48cd-93a1-5818fc463c2e","Type":"ContainerDied","Data":"6d474234306d4eafff9d7c677b7540e543cc60d1d26cdc3c4e3195f6555fe236"} Feb 26 17:51:52 crc kubenswrapper[4805]: I0226 17:51:52.997600 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.106758 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-ssh-key-openstack-edpm-ipam\") pod \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.107087 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-inventory\") pod \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.107347 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkr7n\" (UniqueName: \"kubernetes.io/projected/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-kube-api-access-pkr7n\") pod \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\" (UID: \"4fd5faad-e8f5-48cd-93a1-5818fc463c2e\") " Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.132320 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-kube-api-access-pkr7n" (OuterVolumeSpecName: "kube-api-access-pkr7n") pod "4fd5faad-e8f5-48cd-93a1-5818fc463c2e" (UID: "4fd5faad-e8f5-48cd-93a1-5818fc463c2e"). InnerVolumeSpecName "kube-api-access-pkr7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.175258 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fd5faad-e8f5-48cd-93a1-5818fc463c2e" (UID: "4fd5faad-e8f5-48cd-93a1-5818fc463c2e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.176314 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-inventory" (OuterVolumeSpecName: "inventory") pod "4fd5faad-e8f5-48cd-93a1-5818fc463c2e" (UID: "4fd5faad-e8f5-48cd-93a1-5818fc463c2e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.211720 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.212162 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkr7n\" (UniqueName: \"kubernetes.io/projected/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-kube-api-access-pkr7n\") on node \"crc\" DevicePath \"\"" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.212255 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fd5faad-e8f5-48cd-93a1-5818fc463c2e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.364361 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" event={"ID":"4fd5faad-e8f5-48cd-93a1-5818fc463c2e","Type":"ContainerDied","Data":"9e8d9ccad241a0979cea7d1b222350a70593ac04799988929c83730eb7a3e823"} Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.364409 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e8d9ccad241a0979cea7d1b222350a70593ac04799988929c83730eb7a3e823" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.364472 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.447964 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wmbgb"] Feb 26 17:51:53 crc kubenswrapper[4805]: E0226 17:51:53.448567 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd5faad-e8f5-48cd-93a1-5818fc463c2e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.448590 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd5faad-e8f5-48cd-93a1-5818fc463c2e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.448839 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd5faad-e8f5-48cd-93a1-5818fc463c2e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.449830 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.452239 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.452430 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.452770 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.452925 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.457711 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wmbgb"] Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.618959 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.619297 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.619562 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gxhr4\" (UniqueName: \"kubernetes.io/projected/d27a40c7-3e84-48cd-849c-a318aac82222-kube-api-access-gxhr4\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.721436 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxhr4\" (UniqueName: \"kubernetes.io/projected/d27a40c7-3e84-48cd-849c-a318aac82222-kube-api-access-gxhr4\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.721524 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.721563 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.726635 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 
17:51:53.728671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.739833 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxhr4\" (UniqueName: \"kubernetes.io/projected/d27a40c7-3e84-48cd-849c-a318aac82222-kube-api-access-gxhr4\") pod \"ssh-known-hosts-edpm-deployment-wmbgb\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:53 crc kubenswrapper[4805]: I0226 17:51:53.776131 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:51:54 crc kubenswrapper[4805]: I0226 17:51:54.306111 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wmbgb"] Feb 26 17:51:54 crc kubenswrapper[4805]: I0226 17:51:54.381207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" event={"ID":"d27a40c7-3e84-48cd-849c-a318aac82222","Type":"ContainerStarted","Data":"f9ef919c1e6459edf694427dfc6f719b10000da66dd3081312e1eff423574d12"} Feb 26 17:51:55 crc kubenswrapper[4805]: I0226 17:51:55.391427 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" event={"ID":"d27a40c7-3e84-48cd-849c-a318aac82222","Type":"ContainerStarted","Data":"50ea915b0ee4b0f4323a7c9f76751a44e9ed42a72bfcfb5473c9c1d3ea48d0a6"} Feb 26 17:51:55 crc kubenswrapper[4805]: I0226 17:51:55.410144 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" podStartSLOduration=1.9703386 
podStartE2EDuration="2.410123881s" podCreationTimestamp="2026-02-26 17:51:53 +0000 UTC" firstStartedPulling="2026-02-26 17:51:54.316380197 +0000 UTC m=+2228.878134526" lastFinishedPulling="2026-02-26 17:51:54.756165468 +0000 UTC m=+2229.317919807" observedRunningTime="2026-02-26 17:51:55.408304425 +0000 UTC m=+2229.970058764" watchObservedRunningTime="2026-02-26 17:51:55.410123881 +0000 UTC m=+2229.971878220" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.138709 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535472-l7dqp"] Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.142129 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.144731 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.145509 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.146150 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.155244 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535472-l7dqp"] Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.213514 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwgrd\" (UniqueName: \"kubernetes.io/projected/eb5b2036-695b-44a8-86a2-0c2b7968ace0-kube-api-access-cwgrd\") pod \"auto-csr-approver-29535472-l7dqp\" (UID: \"eb5b2036-695b-44a8-86a2-0c2b7968ace0\") " pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.315413 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwgrd\" (UniqueName: \"kubernetes.io/projected/eb5b2036-695b-44a8-86a2-0c2b7968ace0-kube-api-access-cwgrd\") pod \"auto-csr-approver-29535472-l7dqp\" (UID: \"eb5b2036-695b-44a8-86a2-0c2b7968ace0\") " pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.333900 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwgrd\" (UniqueName: \"kubernetes.io/projected/eb5b2036-695b-44a8-86a2-0c2b7968ace0-kube-api-access-cwgrd\") pod \"auto-csr-approver-29535472-l7dqp\" (UID: \"eb5b2036-695b-44a8-86a2-0c2b7968ace0\") " pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.469111 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:00 crc kubenswrapper[4805]: I0226 17:52:00.990047 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535472-l7dqp"] Feb 26 17:52:01 crc kubenswrapper[4805]: I0226 17:52:01.453895 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" event={"ID":"eb5b2036-695b-44a8-86a2-0c2b7968ace0","Type":"ContainerStarted","Data":"aee1abd5e04881e6d023be5d93665a412a5ca88a0ab42c8041da56e4c529443f"} Feb 26 17:52:01 crc kubenswrapper[4805]: I0226 17:52:01.455659 4805 generic.go:334] "Generic (PLEG): container finished" podID="d27a40c7-3e84-48cd-849c-a318aac82222" containerID="50ea915b0ee4b0f4323a7c9f76751a44e9ed42a72bfcfb5473c9c1d3ea48d0a6" exitCode=0 Feb 26 17:52:01 crc kubenswrapper[4805]: I0226 17:52:01.455706 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" 
event={"ID":"d27a40c7-3e84-48cd-849c-a318aac82222","Type":"ContainerDied","Data":"50ea915b0ee4b0f4323a7c9f76751a44e9ed42a72bfcfb5473c9c1d3ea48d0a6"} Feb 26 17:52:02 crc kubenswrapper[4805]: I0226 17:52:02.467942 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" event={"ID":"eb5b2036-695b-44a8-86a2-0c2b7968ace0","Type":"ContainerStarted","Data":"272b2ec48f3cea8b1720e47197d75e8d53d12907b35305e26096a1e790ef17ae"} Feb 26 17:52:02 crc kubenswrapper[4805]: I0226 17:52:02.490373 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" podStartSLOduration=1.496016274 podStartE2EDuration="2.490349109s" podCreationTimestamp="2026-02-26 17:52:00 +0000 UTC" firstStartedPulling="2026-02-26 17:52:01.017928087 +0000 UTC m=+2235.579682426" lastFinishedPulling="2026-02-26 17:52:02.012260922 +0000 UTC m=+2236.574015261" observedRunningTime="2026-02-26 17:52:02.489761884 +0000 UTC m=+2237.051516223" watchObservedRunningTime="2026-02-26 17:52:02.490349109 +0000 UTC m=+2237.052103448" Feb 26 17:52:02 crc kubenswrapper[4805]: I0226 17:52:02.977665 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:52:02 crc kubenswrapper[4805]: I0226 17:52:02.978045 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.024925 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.182519 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxhr4\" (UniqueName: \"kubernetes.io/projected/d27a40c7-3e84-48cd-849c-a318aac82222-kube-api-access-gxhr4\") pod \"d27a40c7-3e84-48cd-849c-a318aac82222\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.182932 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-inventory-0\") pod \"d27a40c7-3e84-48cd-849c-a318aac82222\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.182988 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-ssh-key-openstack-edpm-ipam\") pod \"d27a40c7-3e84-48cd-849c-a318aac82222\" (UID: \"d27a40c7-3e84-48cd-849c-a318aac82222\") " Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.190384 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d27a40c7-3e84-48cd-849c-a318aac82222-kube-api-access-gxhr4" (OuterVolumeSpecName: "kube-api-access-gxhr4") pod "d27a40c7-3e84-48cd-849c-a318aac82222" (UID: "d27a40c7-3e84-48cd-849c-a318aac82222"). InnerVolumeSpecName "kube-api-access-gxhr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.213537 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d27a40c7-3e84-48cd-849c-a318aac82222" (UID: "d27a40c7-3e84-48cd-849c-a318aac82222"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.217252 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d27a40c7-3e84-48cd-849c-a318aac82222" (UID: "d27a40c7-3e84-48cd-849c-a318aac82222"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.287411 4805 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.287439 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d27a40c7-3e84-48cd-849c-a318aac82222-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.287450 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxhr4\" (UniqueName: \"kubernetes.io/projected/d27a40c7-3e84-48cd-849c-a318aac82222-kube-api-access-gxhr4\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.479319 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" 
event={"ID":"d27a40c7-3e84-48cd-849c-a318aac82222","Type":"ContainerDied","Data":"f9ef919c1e6459edf694427dfc6f719b10000da66dd3081312e1eff423574d12"} Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.479370 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ef919c1e6459edf694427dfc6f719b10000da66dd3081312e1eff423574d12" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.479430 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wmbgb" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.485503 4805 generic.go:334] "Generic (PLEG): container finished" podID="eb5b2036-695b-44a8-86a2-0c2b7968ace0" containerID="272b2ec48f3cea8b1720e47197d75e8d53d12907b35305e26096a1e790ef17ae" exitCode=0 Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.485536 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" event={"ID":"eb5b2036-695b-44a8-86a2-0c2b7968ace0","Type":"ContainerDied","Data":"272b2ec48f3cea8b1720e47197d75e8d53d12907b35305e26096a1e790ef17ae"} Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.565117 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx"] Feb 26 17:52:03 crc kubenswrapper[4805]: E0226 17:52:03.565713 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d27a40c7-3e84-48cd-849c-a318aac82222" containerName="ssh-known-hosts-edpm-deployment" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.565734 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d27a40c7-3e84-48cd-849c-a318aac82222" containerName="ssh-known-hosts-edpm-deployment" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.566053 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d27a40c7-3e84-48cd-849c-a318aac82222" containerName="ssh-known-hosts-edpm-deployment" Feb 26 17:52:03 crc 
kubenswrapper[4805]: I0226 17:52:03.567402 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.571582 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.572112 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.572385 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.577645 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.584448 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx"] Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.696417 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.696590 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 
17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.696620 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshfz\" (UniqueName: \"kubernetes.io/projected/26d29c02-84a8-41af-b5ae-7ad977cc33a1-kube-api-access-pshfz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.798208 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.798366 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.798399 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshfz\" (UniqueName: \"kubernetes.io/projected/26d29c02-84a8-41af-b5ae-7ad977cc33a1-kube-api-access-pshfz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.804203 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.808383 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.817061 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshfz\" (UniqueName: \"kubernetes.io/projected/26d29c02-84a8-41af-b5ae-7ad977cc33a1-kube-api-access-pshfz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pxdcx\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:03 crc kubenswrapper[4805]: I0226 17:52:03.888049 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:04 crc kubenswrapper[4805]: I0226 17:52:04.442360 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx"] Feb 26 17:52:04 crc kubenswrapper[4805]: W0226 17:52:04.445050 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26d29c02_84a8_41af_b5ae_7ad977cc33a1.slice/crio-839612d7ef16c26116fb2494f595c001148d72d4c041322b5df0bd7ab3bc606c WatchSource:0}: Error finding container 839612d7ef16c26116fb2494f595c001148d72d4c041322b5df0bd7ab3bc606c: Status 404 returned error can't find the container with id 839612d7ef16c26116fb2494f595c001148d72d4c041322b5df0bd7ab3bc606c Feb 26 17:52:04 crc kubenswrapper[4805]: I0226 17:52:04.497220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" event={"ID":"26d29c02-84a8-41af-b5ae-7ad977cc33a1","Type":"ContainerStarted","Data":"839612d7ef16c26116fb2494f595c001148d72d4c041322b5df0bd7ab3bc606c"} Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.091390 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.233983 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwgrd\" (UniqueName: \"kubernetes.io/projected/eb5b2036-695b-44a8-86a2-0c2b7968ace0-kube-api-access-cwgrd\") pod \"eb5b2036-695b-44a8-86a2-0c2b7968ace0\" (UID: \"eb5b2036-695b-44a8-86a2-0c2b7968ace0\") " Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.247402 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5b2036-695b-44a8-86a2-0c2b7968ace0-kube-api-access-cwgrd" (OuterVolumeSpecName: "kube-api-access-cwgrd") pod "eb5b2036-695b-44a8-86a2-0c2b7968ace0" (UID: "eb5b2036-695b-44a8-86a2-0c2b7968ace0"). InnerVolumeSpecName "kube-api-access-cwgrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.336550 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwgrd\" (UniqueName: \"kubernetes.io/projected/eb5b2036-695b-44a8-86a2-0c2b7968ace0-kube-api-access-cwgrd\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.515790 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" event={"ID":"eb5b2036-695b-44a8-86a2-0c2b7968ace0","Type":"ContainerDied","Data":"aee1abd5e04881e6d023be5d93665a412a5ca88a0ab42c8041da56e4c529443f"} Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.515830 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee1abd5e04881e6d023be5d93665a412a5ca88a0ab42c8041da56e4c529443f" Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.515903 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535472-l7dqp" Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.530281 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" event={"ID":"26d29c02-84a8-41af-b5ae-7ad977cc33a1","Type":"ContainerStarted","Data":"44d5404b4f2b5bf322fe0c1b6220766b3d8c209a79d750b9ea6b3a15bf3adead"} Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.551091 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" podStartSLOduration=2.105591504 podStartE2EDuration="2.551074622s" podCreationTimestamp="2026-02-26 17:52:03 +0000 UTC" firstStartedPulling="2026-02-26 17:52:04.447297252 +0000 UTC m=+2239.009051591" lastFinishedPulling="2026-02-26 17:52:04.89278037 +0000 UTC m=+2239.454534709" observedRunningTime="2026-02-26 17:52:05.550169699 +0000 UTC m=+2240.111924038" watchObservedRunningTime="2026-02-26 17:52:05.551074622 +0000 UTC m=+2240.112828961" Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.583622 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535466-kwdlq"] Feb 26 17:52:05 crc kubenswrapper[4805]: I0226 17:52:05.598789 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535466-kwdlq"] Feb 26 17:52:06 crc kubenswrapper[4805]: I0226 17:52:06.970929 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472fb3d5-0b1c-4668-93f8-06f6da00569f" path="/var/lib/kubelet/pods/472fb3d5-0b1c-4668-93f8-06f6da00569f/volumes" Feb 26 17:52:13 crc kubenswrapper[4805]: I0226 17:52:13.602149 4805 generic.go:334] "Generic (PLEG): container finished" podID="26d29c02-84a8-41af-b5ae-7ad977cc33a1" containerID="44d5404b4f2b5bf322fe0c1b6220766b3d8c209a79d750b9ea6b3a15bf3adead" exitCode=0 Feb 26 17:52:13 crc kubenswrapper[4805]: I0226 17:52:13.602260 4805 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" event={"ID":"26d29c02-84a8-41af-b5ae-7ad977cc33a1","Type":"ContainerDied","Data":"44d5404b4f2b5bf322fe0c1b6220766b3d8c209a79d750b9ea6b3a15bf3adead"} Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.147696 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.309482 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pshfz\" (UniqueName: \"kubernetes.io/projected/26d29c02-84a8-41af-b5ae-7ad977cc33a1-kube-api-access-pshfz\") pod \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.309669 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-inventory\") pod \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.309734 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-ssh-key-openstack-edpm-ipam\") pod \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\" (UID: \"26d29c02-84a8-41af-b5ae-7ad977cc33a1\") " Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.316154 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26d29c02-84a8-41af-b5ae-7ad977cc33a1-kube-api-access-pshfz" (OuterVolumeSpecName: "kube-api-access-pshfz") pod "26d29c02-84a8-41af-b5ae-7ad977cc33a1" (UID: "26d29c02-84a8-41af-b5ae-7ad977cc33a1"). InnerVolumeSpecName "kube-api-access-pshfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.348077 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26d29c02-84a8-41af-b5ae-7ad977cc33a1" (UID: "26d29c02-84a8-41af-b5ae-7ad977cc33a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.353004 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-inventory" (OuterVolumeSpecName: "inventory") pod "26d29c02-84a8-41af-b5ae-7ad977cc33a1" (UID: "26d29c02-84a8-41af-b5ae-7ad977cc33a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.412410 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pshfz\" (UniqueName: \"kubernetes.io/projected/26d29c02-84a8-41af-b5ae-7ad977cc33a1-kube-api-access-pshfz\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.412448 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.412461 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26d29c02-84a8-41af-b5ae-7ad977cc33a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.629042 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" 
event={"ID":"26d29c02-84a8-41af-b5ae-7ad977cc33a1","Type":"ContainerDied","Data":"839612d7ef16c26116fb2494f595c001148d72d4c041322b5df0bd7ab3bc606c"} Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.629394 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="839612d7ef16c26116fb2494f595c001148d72d4c041322b5df0bd7ab3bc606c" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.629187 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pxdcx" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.700332 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks"] Feb 26 17:52:15 crc kubenswrapper[4805]: E0226 17:52:15.701289 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26d29c02-84a8-41af-b5ae-7ad977cc33a1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.701319 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="26d29c02-84a8-41af-b5ae-7ad977cc33a1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:52:15 crc kubenswrapper[4805]: E0226 17:52:15.701358 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5b2036-695b-44a8-86a2-0c2b7968ace0" containerName="oc" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.701366 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5b2036-695b-44a8-86a2-0c2b7968ace0" containerName="oc" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.701667 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5b2036-695b-44a8-86a2-0c2b7968ace0" containerName="oc" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.701709 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="26d29c02-84a8-41af-b5ae-7ad977cc33a1" 
containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.703118 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.713008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks"] Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.742703 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.742741 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.742867 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.744733 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.846075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.846214 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzc25\" (UniqueName: \"kubernetes.io/projected/9562d61f-fcfb-40c3-8a39-500bfa314c5e-kube-api-access-vzc25\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: 
\"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.847285 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.949237 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.949332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.949968 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzc25\" (UniqueName: \"kubernetes.io/projected/9562d61f-fcfb-40c3-8a39-500bfa314c5e-kube-api-access-vzc25\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.953478 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.962813 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:15 crc kubenswrapper[4805]: I0226 17:52:15.985944 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzc25\" (UniqueName: \"kubernetes.io/projected/9562d61f-fcfb-40c3-8a39-500bfa314c5e-kube-api-access-vzc25\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:16 crc kubenswrapper[4805]: I0226 17:52:16.069706 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:16 crc kubenswrapper[4805]: I0226 17:52:16.691193 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks"] Feb 26 17:52:17 crc kubenswrapper[4805]: I0226 17:52:17.649804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" event={"ID":"9562d61f-fcfb-40c3-8a39-500bfa314c5e","Type":"ContainerStarted","Data":"b9731078b4f0d7a8b50d46bfb721157f7f40c33e43997bb09e25a6b0d8fff41b"} Feb 26 17:52:17 crc kubenswrapper[4805]: I0226 17:52:17.650372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" event={"ID":"9562d61f-fcfb-40c3-8a39-500bfa314c5e","Type":"ContainerStarted","Data":"b04b7b5625eb741d93ff1d6677c8df4c6b1a02087adffcf1cf182737674c607f"} Feb 26 17:52:17 crc kubenswrapper[4805]: I0226 17:52:17.671605 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" podStartSLOduration=2.218704587 podStartE2EDuration="2.671486631s" podCreationTimestamp="2026-02-26 17:52:15 +0000 UTC" firstStartedPulling="2026-02-26 17:52:16.690335271 +0000 UTC m=+2251.252089610" lastFinishedPulling="2026-02-26 17:52:17.143117315 +0000 UTC m=+2251.704871654" observedRunningTime="2026-02-26 17:52:17.664719959 +0000 UTC m=+2252.226474298" watchObservedRunningTime="2026-02-26 17:52:17.671486631 +0000 UTC m=+2252.233240980" Feb 26 17:52:23 crc kubenswrapper[4805]: I0226 17:52:23.051787 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-p9sz6"] Feb 26 17:52:23 crc kubenswrapper[4805]: I0226 17:52:23.065281 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-p9sz6"] Feb 26 17:52:24 crc kubenswrapper[4805]: I0226 17:52:24.965073 
4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca29b268-971b-42f0-b538-4b02dfb52154" path="/var/lib/kubelet/pods/ca29b268-971b-42f0-b538-4b02dfb52154/volumes" Feb 26 17:52:26 crc kubenswrapper[4805]: I0226 17:52:26.785386 4805 generic.go:334] "Generic (PLEG): container finished" podID="9562d61f-fcfb-40c3-8a39-500bfa314c5e" containerID="b9731078b4f0d7a8b50d46bfb721157f7f40c33e43997bb09e25a6b0d8fff41b" exitCode=0 Feb 26 17:52:26 crc kubenswrapper[4805]: I0226 17:52:26.785467 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" event={"ID":"9562d61f-fcfb-40c3-8a39-500bfa314c5e","Type":"ContainerDied","Data":"b9731078b4f0d7a8b50d46bfb721157f7f40c33e43997bb09e25a6b0d8fff41b"} Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.315120 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.459081 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzc25\" (UniqueName: \"kubernetes.io/projected/9562d61f-fcfb-40c3-8a39-500bfa314c5e-kube-api-access-vzc25\") pod \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.459596 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-inventory\") pod \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.459632 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-ssh-key-openstack-edpm-ipam\") pod 
\"9562d61f-fcfb-40c3-8a39-500bfa314c5e\" (UID: \"9562d61f-fcfb-40c3-8a39-500bfa314c5e\") " Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.464799 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9562d61f-fcfb-40c3-8a39-500bfa314c5e-kube-api-access-vzc25" (OuterVolumeSpecName: "kube-api-access-vzc25") pod "9562d61f-fcfb-40c3-8a39-500bfa314c5e" (UID: "9562d61f-fcfb-40c3-8a39-500bfa314c5e"). InnerVolumeSpecName "kube-api-access-vzc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.488873 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-inventory" (OuterVolumeSpecName: "inventory") pod "9562d61f-fcfb-40c3-8a39-500bfa314c5e" (UID: "9562d61f-fcfb-40c3-8a39-500bfa314c5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.495895 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9562d61f-fcfb-40c3-8a39-500bfa314c5e" (UID: "9562d61f-fcfb-40c3-8a39-500bfa314c5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.561424 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.561472 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9562d61f-fcfb-40c3-8a39-500bfa314c5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.561485 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzc25\" (UniqueName: \"kubernetes.io/projected/9562d61f-fcfb-40c3-8a39-500bfa314c5e-kube-api-access-vzc25\") on node \"crc\" DevicePath \"\"" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.809722 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" event={"ID":"9562d61f-fcfb-40c3-8a39-500bfa314c5e","Type":"ContainerDied","Data":"b04b7b5625eb741d93ff1d6677c8df4c6b1a02087adffcf1cf182737674c607f"} Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.809774 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b04b7b5625eb741d93ff1d6677c8df4c6b1a02087adffcf1cf182737674c607f" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.809844 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.929530 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9"] Feb 26 17:52:28 crc kubenswrapper[4805]: E0226 17:52:28.929971 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9562d61f-fcfb-40c3-8a39-500bfa314c5e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.929990 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9562d61f-fcfb-40c3-8a39-500bfa314c5e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.930213 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9562d61f-fcfb-40c3-8a39-500bfa314c5e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.930920 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.934911 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.935339 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.936455 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.936708 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.936955 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.937160 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.937323 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 26 17:52:28 crc kubenswrapper[4805]: I0226 17:52:28.945875 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.010871 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9"] Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077496 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077566 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077606 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077675 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077812 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.077865 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.078169 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.078210 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.078534 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.079520 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cp7\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-kube-api-access-b2cp7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.079633 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.079677 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.079777 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182225 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182286 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182310 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182338 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cp7\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-kube-api-access-b2cp7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182395 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182416 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182443 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182477 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182502 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182533 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182575 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182650 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182687 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.182711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.189567 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc 
kubenswrapper[4805]: I0226 17:52:29.189585 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.189584 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.189576 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.190043 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.190684 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.191269 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.191707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.191817 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.192549 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.193950 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.194600 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.196931 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.202471 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2cp7\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-kube-api-access-b2cp7\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") 
" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.251465 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:52:29 crc kubenswrapper[4805]: I0226 17:52:29.825797 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9"] Feb 26 17:52:30 crc kubenswrapper[4805]: I0226 17:52:30.832604 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" event={"ID":"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882","Type":"ContainerStarted","Data":"afce6f3d1ed3cfc1609feb620056f2ca9489de643ededbe957c85de361631f6c"} Feb 26 17:52:30 crc kubenswrapper[4805]: I0226 17:52:30.832908 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" event={"ID":"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882","Type":"ContainerStarted","Data":"8f75667e23b585b2a6f01f9140575d22a99d575895f9fcd7e9e60a25dd80fca7"} Feb 26 17:52:30 crc kubenswrapper[4805]: I0226 17:52:30.859762 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" podStartSLOduration=2.396610177 podStartE2EDuration="2.859738924s" podCreationTimestamp="2026-02-26 17:52:28 +0000 UTC" firstStartedPulling="2026-02-26 17:52:29.829407373 +0000 UTC m=+2264.391161712" lastFinishedPulling="2026-02-26 17:52:30.29253611 +0000 UTC m=+2264.854290459" observedRunningTime="2026-02-26 17:52:30.849873413 +0000 UTC m=+2265.411627752" watchObservedRunningTime="2026-02-26 17:52:30.859738924 +0000 UTC m=+2265.421493263" Feb 26 17:52:32 crc kubenswrapper[4805]: I0226 17:52:32.978325 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:52:32 crc kubenswrapper[4805]: I0226 17:52:32.978758 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:52:46 crc kubenswrapper[4805]: I0226 17:52:46.121002 4805 scope.go:117] "RemoveContainer" containerID="77b7861274b136b1a52e08d8d6651e01bd31d33e2e59523c1c4424105d63df2a" Feb 26 17:52:46 crc kubenswrapper[4805]: I0226 17:52:46.216810 4805 scope.go:117] "RemoveContainer" containerID="4d4a28e52744c6908a4378d042554017ce47662a1927515efa01d296af788ffd" Feb 26 17:53:02 crc kubenswrapper[4805]: I0226 17:53:02.978215 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:53:02 crc kubenswrapper[4805]: I0226 17:53:02.978740 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:53:02 crc kubenswrapper[4805]: I0226 17:53:02.978799 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:53:02 crc kubenswrapper[4805]: I0226 17:53:02.979700 4805 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b8968885710b097709d7bb1c27dbd325f06798e3afdff6ea081963d31b05a35"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:53:02 crc kubenswrapper[4805]: I0226 17:53:02.979756 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://8b8968885710b097709d7bb1c27dbd325f06798e3afdff6ea081963d31b05a35" gracePeriod=600 Feb 26 17:53:03 crc kubenswrapper[4805]: I0226 17:53:03.142508 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="8b8968885710b097709d7bb1c27dbd325f06798e3afdff6ea081963d31b05a35" exitCode=0 Feb 26 17:53:03 crc kubenswrapper[4805]: I0226 17:53:03.142551 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"8b8968885710b097709d7bb1c27dbd325f06798e3afdff6ea081963d31b05a35"} Feb 26 17:53:03 crc kubenswrapper[4805]: I0226 17:53:03.142584 4805 scope.go:117] "RemoveContainer" containerID="8805aea259eb85c371de7bf3781a3208df65fcf541775082ddfaaaa543988732" Feb 26 17:53:04 crc kubenswrapper[4805]: I0226 17:53:04.206369 4805 generic.go:334] "Generic (PLEG): container finished" podID="48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" containerID="afce6f3d1ed3cfc1609feb620056f2ca9489de643ededbe957c85de361631f6c" exitCode=0 Feb 26 17:53:04 crc kubenswrapper[4805]: I0226 17:53:04.206533 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" 
event={"ID":"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882","Type":"ContainerDied","Data":"afce6f3d1ed3cfc1609feb620056f2ca9489de643ededbe957c85de361631f6c"} Feb 26 17:53:04 crc kubenswrapper[4805]: I0226 17:53:04.210804 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172"} Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.824689 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926145 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-bootstrap-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926211 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-telemetry-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926241 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-neutron-metadata-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926273 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926308 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-libvirt-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926363 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-inventory\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-repo-setup-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926501 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926524 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-nova-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926568 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ovn-combined-ca-bundle\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2cp7\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-kube-api-access-b2cp7\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926688 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ssh-key-openstack-edpm-ipam\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926714 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-ovn-default-certs-0\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.926735 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\" (UID: \"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882\") " Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.934861 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.934998 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.935177 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.935396 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.935539 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.935529 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.936264 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.937457 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.939474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.940933 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-kube-api-access-b2cp7" (OuterVolumeSpecName: "kube-api-access-b2cp7") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "kube-api-access-b2cp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.945404 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.950350 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.968725 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:05 crc kubenswrapper[4805]: I0226 17:53:05.977063 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-inventory" (OuterVolumeSpecName: "inventory") pod "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" (UID: "48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029484 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029519 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029531 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029577 4805 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029590 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029598 4805 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029607 4805 reconciler_common.go:293] 
"Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029618 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029627 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029635 4805 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029644 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029652 4805 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.029661 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc 
kubenswrapper[4805]: I0226 17:53:06.029669 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2cp7\" (UniqueName: \"kubernetes.io/projected/48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882-kube-api-access-b2cp7\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.232763 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" event={"ID":"48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882","Type":"ContainerDied","Data":"8f75667e23b585b2a6f01f9140575d22a99d575895f9fcd7e9e60a25dd80fca7"} Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.232814 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f75667e23b585b2a6f01f9140575d22a99d575895f9fcd7e9e60a25dd80fca7" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.232822 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.417601 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8"] Feb 26 17:53:06 crc kubenswrapper[4805]: E0226 17:53:06.418647 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.418720 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.418950 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.419794 4805 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.425644 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.425694 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.425703 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.425667 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.426103 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.443214 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8"] Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.540441 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.540576 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: 
\"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.540619 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.540716 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/92900aa9-a844-4669-993f-b2250e3093a1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.540745 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rd8l\" (UniqueName: \"kubernetes.io/projected/92900aa9-a844-4669-993f-b2250e3093a1-kube-api-access-2rd8l\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.643528 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.643704 4805 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.643887 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/92900aa9-a844-4669-993f-b2250e3093a1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.643949 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rd8l\" (UniqueName: \"kubernetes.io/projected/92900aa9-a844-4669-993f-b2250e3093a1-kube-api-access-2rd8l\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.644046 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.644870 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/92900aa9-a844-4669-993f-b2250e3093a1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: 
\"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.649368 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.650227 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.650525 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.664120 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rd8l\" (UniqueName: \"kubernetes.io/projected/92900aa9-a844-4669-993f-b2250e3093a1-kube-api-access-2rd8l\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ghqf8\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:06 crc kubenswrapper[4805]: I0226 17:53:06.738743 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:53:07 crc kubenswrapper[4805]: W0226 17:53:07.270236 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92900aa9_a844_4669_993f_b2250e3093a1.slice/crio-4f4dfd0b8f42f56a220924fb5eb537ff1bfc32618fdf2e25c6a19fa1ced7d488 WatchSource:0}: Error finding container 4f4dfd0b8f42f56a220924fb5eb537ff1bfc32618fdf2e25c6a19fa1ced7d488: Status 404 returned error can't find the container with id 4f4dfd0b8f42f56a220924fb5eb537ff1bfc32618fdf2e25c6a19fa1ced7d488 Feb 26 17:53:07 crc kubenswrapper[4805]: I0226 17:53:07.277567 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8"] Feb 26 17:53:08 crc kubenswrapper[4805]: I0226 17:53:08.250303 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" event={"ID":"92900aa9-a844-4669-993f-b2250e3093a1","Type":"ContainerStarted","Data":"593fdf818dd770eb061e2bc380578115fd08694622b6902dd062309788a856d1"} Feb 26 17:53:08 crc kubenswrapper[4805]: I0226 17:53:08.250878 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" event={"ID":"92900aa9-a844-4669-993f-b2250e3093a1","Type":"ContainerStarted","Data":"4f4dfd0b8f42f56a220924fb5eb537ff1bfc32618fdf2e25c6a19fa1ced7d488"} Feb 26 17:53:08 crc kubenswrapper[4805]: I0226 17:53:08.283515 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" podStartSLOduration=1.776242208 podStartE2EDuration="2.283488477s" podCreationTimestamp="2026-02-26 17:53:06 +0000 UTC" firstStartedPulling="2026-02-26 17:53:07.27414094 +0000 UTC m=+2301.835895279" lastFinishedPulling="2026-02-26 17:53:07.781387209 +0000 UTC m=+2302.343141548" observedRunningTime="2026-02-26 
17:53:08.264373991 +0000 UTC m=+2302.826128330" watchObservedRunningTime="2026-02-26 17:53:08.283488477 +0000 UTC m=+2302.845242816" Feb 26 17:53:09 crc kubenswrapper[4805]: I0226 17:53:09.050650 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-f94vb"] Feb 26 17:53:09 crc kubenswrapper[4805]: I0226 17:53:09.065974 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-f94vb"] Feb 26 17:53:10 crc kubenswrapper[4805]: I0226 17:53:10.972084 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ed3f58-3a14-4f1d-9366-e5816386c23b" path="/var/lib/kubelet/pods/82ed3f58-3a14-4f1d-9366-e5816386c23b/volumes" Feb 26 17:53:16 crc kubenswrapper[4805]: I0226 17:53:16.057839 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-5twt4"] Feb 26 17:53:16 crc kubenswrapper[4805]: I0226 17:53:16.070149 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-5twt4"] Feb 26 17:53:16 crc kubenswrapper[4805]: I0226 17:53:16.983776 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fc7d22-e37b-433b-85be-58596c3e9c0a" path="/var/lib/kubelet/pods/32fc7d22-e37b-433b-85be-58596c3e9c0a/volumes" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.186878 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9kxd"] Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.192940 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.202263 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9kxd"] Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.299260 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-utilities\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.299579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnq5\" (UniqueName: \"kubernetes.io/projected/bdf444e7-5b29-4676-ab78-6a8cf0659504-kube-api-access-7rnq5\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.299642 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-catalog-content\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.401682 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-utilities\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.401739 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7rnq5\" (UniqueName: \"kubernetes.io/projected/bdf444e7-5b29-4676-ab78-6a8cf0659504-kube-api-access-7rnq5\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.401832 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-catalog-content\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.402754 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-utilities\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.402755 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-catalog-content\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.421165 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnq5\" (UniqueName: \"kubernetes.io/projected/bdf444e7-5b29-4676-ab78-6a8cf0659504-kube-api-access-7rnq5\") pod \"certified-operators-s9kxd\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:23 crc kubenswrapper[4805]: I0226 17:53:23.521809 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:24 crc kubenswrapper[4805]: W0226 17:53:24.112278 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdf444e7_5b29_4676_ab78_6a8cf0659504.slice/crio-4e0b05a849fdd9c7cc506c5725c111691c786363f8bfd25d41bd769ccb7f3bf1 WatchSource:0}: Error finding container 4e0b05a849fdd9c7cc506c5725c111691c786363f8bfd25d41bd769ccb7f3bf1: Status 404 returned error can't find the container with id 4e0b05a849fdd9c7cc506c5725c111691c786363f8bfd25d41bd769ccb7f3bf1 Feb 26 17:53:24 crc kubenswrapper[4805]: I0226 17:53:24.113005 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9kxd"] Feb 26 17:53:24 crc kubenswrapper[4805]: I0226 17:53:24.416869 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerStarted","Data":"4e0b05a849fdd9c7cc506c5725c111691c786363f8bfd25d41bd769ccb7f3bf1"} Feb 26 17:53:25 crc kubenswrapper[4805]: I0226 17:53:25.429410 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerID="e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e" exitCode=0 Feb 26 17:53:25 crc kubenswrapper[4805]: I0226 17:53:25.429730 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerDied","Data":"e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e"} Feb 26 17:53:26 crc kubenswrapper[4805]: I0226 17:53:26.440659 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" 
event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerStarted","Data":"fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c"} Feb 26 17:53:28 crc kubenswrapper[4805]: I0226 17:53:28.462795 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerID="fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c" exitCode=0 Feb 26 17:53:28 crc kubenswrapper[4805]: I0226 17:53:28.462889 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerDied","Data":"fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c"} Feb 26 17:53:29 crc kubenswrapper[4805]: I0226 17:53:29.476814 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerStarted","Data":"642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f"} Feb 26 17:53:29 crc kubenswrapper[4805]: I0226 17:53:29.495263 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9kxd" podStartSLOduration=2.687478718 podStartE2EDuration="6.495244525s" podCreationTimestamp="2026-02-26 17:53:23 +0000 UTC" firstStartedPulling="2026-02-26 17:53:25.433492614 +0000 UTC m=+2319.995246953" lastFinishedPulling="2026-02-26 17:53:29.241258421 +0000 UTC m=+2323.803012760" observedRunningTime="2026-02-26 17:53:29.493906911 +0000 UTC m=+2324.055661250" watchObservedRunningTime="2026-02-26 17:53:29.495244525 +0000 UTC m=+2324.056998864" Feb 26 17:53:33 crc kubenswrapper[4805]: I0226 17:53:33.522500 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:33 crc kubenswrapper[4805]: I0226 17:53:33.522999 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:33 crc kubenswrapper[4805]: I0226 17:53:33.571489 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:34 crc kubenswrapper[4805]: I0226 17:53:34.583007 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:34 crc kubenswrapper[4805]: I0226 17:53:34.637714 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9kxd"] Feb 26 17:53:36 crc kubenswrapper[4805]: I0226 17:53:36.547906 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9kxd" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="registry-server" containerID="cri-o://642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f" gracePeriod=2 Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.129086 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.147355 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-catalog-content\") pod \"bdf444e7-5b29-4676-ab78-6a8cf0659504\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.147605 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rnq5\" (UniqueName: \"kubernetes.io/projected/bdf444e7-5b29-4676-ab78-6a8cf0659504-kube-api-access-7rnq5\") pod \"bdf444e7-5b29-4676-ab78-6a8cf0659504\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.147828 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-utilities\") pod \"bdf444e7-5b29-4676-ab78-6a8cf0659504\" (UID: \"bdf444e7-5b29-4676-ab78-6a8cf0659504\") " Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.148703 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-utilities" (OuterVolumeSpecName: "utilities") pod "bdf444e7-5b29-4676-ab78-6a8cf0659504" (UID: "bdf444e7-5b29-4676-ab78-6a8cf0659504"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.149643 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.154712 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf444e7-5b29-4676-ab78-6a8cf0659504-kube-api-access-7rnq5" (OuterVolumeSpecName: "kube-api-access-7rnq5") pod "bdf444e7-5b29-4676-ab78-6a8cf0659504" (UID: "bdf444e7-5b29-4676-ab78-6a8cf0659504"). InnerVolumeSpecName "kube-api-access-7rnq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.252269 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rnq5\" (UniqueName: \"kubernetes.io/projected/bdf444e7-5b29-4676-ab78-6a8cf0659504-kube-api-access-7rnq5\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.375758 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdf444e7-5b29-4676-ab78-6a8cf0659504" (UID: "bdf444e7-5b29-4676-ab78-6a8cf0659504"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.455778 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf444e7-5b29-4676-ab78-6a8cf0659504-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.560950 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerID="642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f" exitCode=0 Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.560995 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerDied","Data":"642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f"} Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.561035 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9kxd" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.561054 4805 scope.go:117] "RemoveContainer" containerID="642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.561042 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9kxd" event={"ID":"bdf444e7-5b29-4676-ab78-6a8cf0659504","Type":"ContainerDied","Data":"4e0b05a849fdd9c7cc506c5725c111691c786363f8bfd25d41bd769ccb7f3bf1"} Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.586799 4805 scope.go:117] "RemoveContainer" containerID="fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.596700 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9kxd"] Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.606485 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9kxd"] Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.626473 4805 scope.go:117] "RemoveContainer" containerID="e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.681191 4805 scope.go:117] "RemoveContainer" containerID="642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f" Feb 26 17:53:37 crc kubenswrapper[4805]: E0226 17:53:37.681701 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f\": container with ID starting with 642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f not found: ID does not exist" containerID="642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.681737 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f"} err="failed to get container status \"642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f\": rpc error: code = NotFound desc = could not find container \"642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f\": container with ID starting with 642bd9d53c74d35bae62fea8a26ddd3a7436032bbf169e54d4723e654fef037f not found: ID does not exist" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.681764 4805 scope.go:117] "RemoveContainer" containerID="fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c" Feb 26 17:53:37 crc kubenswrapper[4805]: E0226 17:53:37.682215 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c\": container with ID starting with fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c not found: ID does not exist" containerID="fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.682274 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c"} err="failed to get container status \"fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c\": rpc error: code = NotFound desc = could not find container \"fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c\": container with ID starting with fa7c82a097bb9dd1180d4bb1c1d5ef36bb4b70666a033bd4b95f33f28726df3c not found: ID does not exist" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.682310 4805 scope.go:117] "RemoveContainer" containerID="e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e" Feb 26 17:53:37 crc kubenswrapper[4805]: E0226 
17:53:37.682663 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e\": container with ID starting with e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e not found: ID does not exist" containerID="e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e" Feb 26 17:53:37 crc kubenswrapper[4805]: I0226 17:53:37.682694 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e"} err="failed to get container status \"e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e\": rpc error: code = NotFound desc = could not find container \"e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e\": container with ID starting with e58e0f609741b02cc8711a9f23922b10d02c3dafa5a3a97c518487f47a490d9e not found: ID does not exist" Feb 26 17:53:38 crc kubenswrapper[4805]: I0226 17:53:38.967146 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" path="/var/lib/kubelet/pods/bdf444e7-5b29-4676-ab78-6a8cf0659504/volumes" Feb 26 17:53:46 crc kubenswrapper[4805]: I0226 17:53:46.373039 4805 scope.go:117] "RemoveContainer" containerID="b74665d65368ed1a299c21ea9acd633f03fbb9ccfa23d912aad66481618bfdca" Feb 26 17:53:46 crc kubenswrapper[4805]: I0226 17:53:46.407722 4805 scope.go:117] "RemoveContainer" containerID="53f85add3a4d93cdd311452c753176994fc5f98d7227cbf05bbe1b1085cf2764" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.151249 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535474-dmvw5"] Feb 26 17:54:00 crc kubenswrapper[4805]: E0226 17:54:00.152253 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="extract-content" Feb 
26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.152265 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="extract-content" Feb 26 17:54:00 crc kubenswrapper[4805]: E0226 17:54:00.152289 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="registry-server" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.152296 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="registry-server" Feb 26 17:54:00 crc kubenswrapper[4805]: E0226 17:54:00.152327 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="extract-utilities" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.152333 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="extract-utilities" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.152532 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf444e7-5b29-4676-ab78-6a8cf0659504" containerName="registry-server" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.153286 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.156029 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.158084 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.159076 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.162294 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535474-dmvw5"] Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.248483 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2khzp\" (UniqueName: \"kubernetes.io/projected/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7-kube-api-access-2khzp\") pod \"auto-csr-approver-29535474-dmvw5\" (UID: \"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7\") " pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.350427 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2khzp\" (UniqueName: \"kubernetes.io/projected/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7-kube-api-access-2khzp\") pod \"auto-csr-approver-29535474-dmvw5\" (UID: \"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7\") " pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.372289 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2khzp\" (UniqueName: \"kubernetes.io/projected/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7-kube-api-access-2khzp\") pod \"auto-csr-approver-29535474-dmvw5\" (UID: \"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7\") " 
pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.476095 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:00 crc kubenswrapper[4805]: I0226 17:54:00.981146 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535474-dmvw5"] Feb 26 17:54:01 crc kubenswrapper[4805]: I0226 17:54:01.825619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" event={"ID":"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7","Type":"ContainerStarted","Data":"b9712562454d55a942b7e93d3e9fc3efb85b86004194ade25d4928cf8bab15e5"} Feb 26 17:54:02 crc kubenswrapper[4805]: I0226 17:54:02.836719 4805 generic.go:334] "Generic (PLEG): container finished" podID="7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7" containerID="cde3ae7a99cdcd7be8f51705218c587bdaff2a8db7008fab456bcde3f496ca2a" exitCode=0 Feb 26 17:54:02 crc kubenswrapper[4805]: I0226 17:54:02.836815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" event={"ID":"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7","Type":"ContainerDied","Data":"cde3ae7a99cdcd7be8f51705218c587bdaff2a8db7008fab456bcde3f496ca2a"} Feb 26 17:54:03 crc kubenswrapper[4805]: I0226 17:54:03.856920 4805 generic.go:334] "Generic (PLEG): container finished" podID="92900aa9-a844-4669-993f-b2250e3093a1" containerID="593fdf818dd770eb061e2bc380578115fd08694622b6902dd062309788a856d1" exitCode=0 Feb 26 17:54:03 crc kubenswrapper[4805]: I0226 17:54:03.857037 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" event={"ID":"92900aa9-a844-4669-993f-b2250e3093a1","Type":"ContainerDied","Data":"593fdf818dd770eb061e2bc380578115fd08694622b6902dd062309788a856d1"} Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.303395 4805 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.443949 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2khzp\" (UniqueName: \"kubernetes.io/projected/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7-kube-api-access-2khzp\") pod \"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7\" (UID: \"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7\") " Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.450766 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7-kube-api-access-2khzp" (OuterVolumeSpecName: "kube-api-access-2khzp") pod "7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7" (UID: "7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7"). InnerVolumeSpecName "kube-api-access-2khzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.546750 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2khzp\" (UniqueName: \"kubernetes.io/projected/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7-kube-api-access-2khzp\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.869562 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" event={"ID":"7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7","Type":"ContainerDied","Data":"b9712562454d55a942b7e93d3e9fc3efb85b86004194ade25d4928cf8bab15e5"} Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.869596 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535474-dmvw5" Feb 26 17:54:04 crc kubenswrapper[4805]: I0226 17:54:04.869637 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9712562454d55a942b7e93d3e9fc3efb85b86004194ade25d4928cf8bab15e5" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.392897 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.397296 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535468-k58kk"] Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.407164 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535468-k58kk"] Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.570236 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rd8l\" (UniqueName: \"kubernetes.io/projected/92900aa9-a844-4669-993f-b2250e3093a1-kube-api-access-2rd8l\") pod \"92900aa9-a844-4669-993f-b2250e3093a1\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.570324 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ssh-key-openstack-edpm-ipam\") pod \"92900aa9-a844-4669-993f-b2250e3093a1\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.570373 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ovn-combined-ca-bundle\") pod \"92900aa9-a844-4669-993f-b2250e3093a1\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " Feb 26 17:54:05 crc 
kubenswrapper[4805]: I0226 17:54:05.570413 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-inventory\") pod \"92900aa9-a844-4669-993f-b2250e3093a1\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.570518 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/92900aa9-a844-4669-993f-b2250e3093a1-ovncontroller-config-0\") pod \"92900aa9-a844-4669-993f-b2250e3093a1\" (UID: \"92900aa9-a844-4669-993f-b2250e3093a1\") " Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.574808 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92900aa9-a844-4669-993f-b2250e3093a1-kube-api-access-2rd8l" (OuterVolumeSpecName: "kube-api-access-2rd8l") pod "92900aa9-a844-4669-993f-b2250e3093a1" (UID: "92900aa9-a844-4669-993f-b2250e3093a1"). InnerVolumeSpecName "kube-api-access-2rd8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.577974 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "92900aa9-a844-4669-993f-b2250e3093a1" (UID: "92900aa9-a844-4669-993f-b2250e3093a1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.600582 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92900aa9-a844-4669-993f-b2250e3093a1" (UID: "92900aa9-a844-4669-993f-b2250e3093a1"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.606219 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92900aa9-a844-4669-993f-b2250e3093a1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "92900aa9-a844-4669-993f-b2250e3093a1" (UID: "92900aa9-a844-4669-993f-b2250e3093a1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.611337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-inventory" (OuterVolumeSpecName: "inventory") pod "92900aa9-a844-4669-993f-b2250e3093a1" (UID: "92900aa9-a844-4669-993f-b2250e3093a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.673494 4805 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.673529 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.673539 4805 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/92900aa9-a844-4669-993f-b2250e3093a1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.673549 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rd8l\" (UniqueName: 
\"kubernetes.io/projected/92900aa9-a844-4669-993f-b2250e3093a1-kube-api-access-2rd8l\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.673558 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92900aa9-a844-4669-993f-b2250e3093a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.882630 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" event={"ID":"92900aa9-a844-4669-993f-b2250e3093a1","Type":"ContainerDied","Data":"4f4dfd0b8f42f56a220924fb5eb537ff1bfc32618fdf2e25c6a19fa1ced7d488"} Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.882686 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4dfd0b8f42f56a220924fb5eb537ff1bfc32618fdf2e25c6a19fa1ced7d488" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.882709 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ghqf8" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.953648 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr"] Feb 26 17:54:05 crc kubenswrapper[4805]: E0226 17:54:05.954215 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7" containerName="oc" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.954245 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7" containerName="oc" Feb 26 17:54:05 crc kubenswrapper[4805]: E0226 17:54:05.954288 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92900aa9-a844-4669-993f-b2250e3093a1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.954295 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="92900aa9-a844-4669-993f-b2250e3093a1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.954589 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7" containerName="oc" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.954608 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="92900aa9-a844-4669-993f-b2250e3093a1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.955417 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.958223 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.958495 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.958644 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.958926 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.959110 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.962859 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 26 17:54:05 crc kubenswrapper[4805]: I0226 17:54:05.982779 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr"] Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.082947 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.083009 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.083088 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.083644 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.084398 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.084640 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx45r\" (UniqueName: \"kubernetes.io/projected/70a36330-f6cb-4235-a39a-902ccc54bdde-kube-api-access-rx45r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.187862 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.188091 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.188626 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx45r\" (UniqueName: \"kubernetes.io/projected/70a36330-f6cb-4235-a39a-902ccc54bdde-kube-api-access-rx45r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.188764 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.188828 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.188958 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.192278 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.192357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-inventory\") 
pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.192357 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.192539 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.198702 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.207690 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx45r\" (UniqueName: \"kubernetes.io/projected/70a36330-f6cb-4235-a39a-902ccc54bdde-kube-api-access-rx45r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.276837 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:06 crc kubenswrapper[4805]: W0226 17:54:06.826421 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70a36330_f6cb_4235_a39a_902ccc54bdde.slice/crio-f710087776c1dadbd61b5baa47ff75eeabe5ba10b5bd32e29abb6c2ef158ec13 WatchSource:0}: Error finding container f710087776c1dadbd61b5baa47ff75eeabe5ba10b5bd32e29abb6c2ef158ec13: Status 404 returned error can't find the container with id f710087776c1dadbd61b5baa47ff75eeabe5ba10b5bd32e29abb6c2ef158ec13 Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.827858 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr"] Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.891862 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" event={"ID":"70a36330-f6cb-4235-a39a-902ccc54bdde","Type":"ContainerStarted","Data":"f710087776c1dadbd61b5baa47ff75eeabe5ba10b5bd32e29abb6c2ef158ec13"} Feb 26 17:54:06 crc kubenswrapper[4805]: I0226 17:54:06.967659 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9663523a-376a-46ac-a20e-a622baf88a96" path="/var/lib/kubelet/pods/9663523a-376a-46ac-a20e-a622baf88a96/volumes" Feb 26 17:54:07 crc kubenswrapper[4805]: I0226 17:54:07.904646 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" event={"ID":"70a36330-f6cb-4235-a39a-902ccc54bdde","Type":"ContainerStarted","Data":"0a406c1310497d65893f3abe7a5956087df2d8810430f1cce8ecc0c824666412"} Feb 26 17:54:07 crc 
kubenswrapper[4805]: I0226 17:54:07.926161 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" podStartSLOduration=2.518170375 podStartE2EDuration="2.926143917s" podCreationTimestamp="2026-02-26 17:54:05 +0000 UTC" firstStartedPulling="2026-02-26 17:54:06.829221131 +0000 UTC m=+2361.390975470" lastFinishedPulling="2026-02-26 17:54:07.237194673 +0000 UTC m=+2361.798949012" observedRunningTime="2026-02-26 17:54:07.922868873 +0000 UTC m=+2362.484623222" watchObservedRunningTime="2026-02-26 17:54:07.926143917 +0000 UTC m=+2362.487898256" Feb 26 17:54:46 crc kubenswrapper[4805]: I0226 17:54:46.538736 4805 scope.go:117] "RemoveContainer" containerID="b48487eb648d56f3b12f255d3cc64a48919ce8d31be151e02bae28b1a39c8c25" Feb 26 17:54:51 crc kubenswrapper[4805]: I0226 17:54:51.396686 4805 generic.go:334] "Generic (PLEG): container finished" podID="70a36330-f6cb-4235-a39a-902ccc54bdde" containerID="0a406c1310497d65893f3abe7a5956087df2d8810430f1cce8ecc0c824666412" exitCode=0 Feb 26 17:54:51 crc kubenswrapper[4805]: I0226 17:54:51.396776 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" event={"ID":"70a36330-f6cb-4235-a39a-902ccc54bdde","Type":"ContainerDied","Data":"0a406c1310497d65893f3abe7a5956087df2d8810430f1cce8ecc0c824666412"} Feb 26 17:54:52 crc kubenswrapper[4805]: I0226 17:54:52.896247 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.067446 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-ssh-key-openstack-edpm-ipam\") pod \"70a36330-f6cb-4235-a39a-902ccc54bdde\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.067500 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx45r\" (UniqueName: \"kubernetes.io/projected/70a36330-f6cb-4235-a39a-902ccc54bdde-kube-api-access-rx45r\") pod \"70a36330-f6cb-4235-a39a-902ccc54bdde\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.067550 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-inventory\") pod \"70a36330-f6cb-4235-a39a-902ccc54bdde\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.067611 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-nova-metadata-neutron-config-0\") pod \"70a36330-f6cb-4235-a39a-902ccc54bdde\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.067695 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-ovn-metadata-agent-neutron-config-0\") pod \"70a36330-f6cb-4235-a39a-902ccc54bdde\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " Feb 26 17:54:53 crc 
kubenswrapper[4805]: I0226 17:54:53.068466 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-metadata-combined-ca-bundle\") pod \"70a36330-f6cb-4235-a39a-902ccc54bdde\" (UID: \"70a36330-f6cb-4235-a39a-902ccc54bdde\") " Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.076241 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "70a36330-f6cb-4235-a39a-902ccc54bdde" (UID: "70a36330-f6cb-4235-a39a-902ccc54bdde"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.097303 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a36330-f6cb-4235-a39a-902ccc54bdde-kube-api-access-rx45r" (OuterVolumeSpecName: "kube-api-access-rx45r") pod "70a36330-f6cb-4235-a39a-902ccc54bdde" (UID: "70a36330-f6cb-4235-a39a-902ccc54bdde"). InnerVolumeSpecName "kube-api-access-rx45r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.112907 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-inventory" (OuterVolumeSpecName: "inventory") pod "70a36330-f6cb-4235-a39a-902ccc54bdde" (UID: "70a36330-f6cb-4235-a39a-902ccc54bdde"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.115437 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "70a36330-f6cb-4235-a39a-902ccc54bdde" (UID: "70a36330-f6cb-4235-a39a-902ccc54bdde"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.123156 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "70a36330-f6cb-4235-a39a-902ccc54bdde" (UID: "70a36330-f6cb-4235-a39a-902ccc54bdde"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.132779 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "70a36330-f6cb-4235-a39a-902ccc54bdde" (UID: "70a36330-f6cb-4235-a39a-902ccc54bdde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.171132 4805 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.171172 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.171188 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx45r\" (UniqueName: \"kubernetes.io/projected/70a36330-f6cb-4235-a39a-902ccc54bdde-kube-api-access-rx45r\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.171202 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.171215 4805 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.171227 4805 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/70a36330-f6cb-4235-a39a-902ccc54bdde-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.418356 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" event={"ID":"70a36330-f6cb-4235-a39a-902ccc54bdde","Type":"ContainerDied","Data":"f710087776c1dadbd61b5baa47ff75eeabe5ba10b5bd32e29abb6c2ef158ec13"} Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.418412 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f710087776c1dadbd61b5baa47ff75eeabe5ba10b5bd32e29abb6c2ef158ec13" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.418429 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.534872 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt"] Feb 26 17:54:53 crc kubenswrapper[4805]: E0226 17:54:53.535521 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a36330-f6cb-4235-a39a-902ccc54bdde" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.535552 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a36330-f6cb-4235-a39a-902ccc54bdde" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.535913 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a36330-f6cb-4235-a39a-902ccc54bdde" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.537256 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.540169 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.540566 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.540697 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.541077 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.541131 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.546501 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt"] Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.683141 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.684308 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fj5j\" (UniqueName: \"kubernetes.io/projected/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-kube-api-access-7fj5j\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: 
\"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.684533 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.684609 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.684824 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.786012 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fj5j\" (UniqueName: \"kubernetes.io/projected/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-kube-api-access-7fj5j\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.786102 
4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.786143 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.786827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.786859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.789707 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.790136 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.791939 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.801111 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.808537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fj5j\" (UniqueName: \"kubernetes.io/projected/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-kube-api-access-7fj5j\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:53 crc kubenswrapper[4805]: I0226 17:54:53.857462 4805 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:54:54 crc kubenswrapper[4805]: I0226 17:54:54.412127 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt"] Feb 26 17:54:54 crc kubenswrapper[4805]: I0226 17:54:54.418949 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 17:54:54 crc kubenswrapper[4805]: I0226 17:54:54.427332 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" event={"ID":"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753","Type":"ContainerStarted","Data":"7e1621a93d953cc66a98a45d0f7e5cb4563b213edd5e8e8085bbafc44336e261"} Feb 26 17:54:55 crc kubenswrapper[4805]: I0226 17:54:55.440097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" event={"ID":"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753","Type":"ContainerStarted","Data":"da7d59e0ae14f17d155caffd64ab6aa3fb09cd9565d7cca40261ad8331c42d81"} Feb 26 17:54:55 crc kubenswrapper[4805]: I0226 17:54:55.460170 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" podStartSLOduration=2.025521094 podStartE2EDuration="2.460146335s" podCreationTimestamp="2026-02-26 17:54:53 +0000 UTC" firstStartedPulling="2026-02-26 17:54:54.418741092 +0000 UTC m=+2408.980495431" lastFinishedPulling="2026-02-26 17:54:54.853366333 +0000 UTC m=+2409.415120672" observedRunningTime="2026-02-26 17:54:55.457701913 +0000 UTC m=+2410.019456272" watchObservedRunningTime="2026-02-26 17:54:55.460146335 +0000 UTC m=+2410.021900684" Feb 26 17:55:32 crc kubenswrapper[4805]: I0226 17:55:32.977761 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:55:32 crc kubenswrapper[4805]: I0226 17:55:32.978278 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.155771 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535476-mnb4s"] Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.159306 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.162626 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.162878 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.166869 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.166991 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535476-mnb4s"] Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.268234 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252qg\" (UniqueName: \"kubernetes.io/projected/209d50d9-d2ec-4837-b292-a4b2a7f97825-kube-api-access-252qg\") pod \"auto-csr-approver-29535476-mnb4s\" (UID: \"209d50d9-d2ec-4837-b292-a4b2a7f97825\") " 
pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.370269 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-252qg\" (UniqueName: \"kubernetes.io/projected/209d50d9-d2ec-4837-b292-a4b2a7f97825-kube-api-access-252qg\") pod \"auto-csr-approver-29535476-mnb4s\" (UID: \"209d50d9-d2ec-4837-b292-a4b2a7f97825\") " pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.394571 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-252qg\" (UniqueName: \"kubernetes.io/projected/209d50d9-d2ec-4837-b292-a4b2a7f97825-kube-api-access-252qg\") pod \"auto-csr-approver-29535476-mnb4s\" (UID: \"209d50d9-d2ec-4837-b292-a4b2a7f97825\") " pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.485434 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:00 crc kubenswrapper[4805]: I0226 17:56:00.981381 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535476-mnb4s"] Feb 26 17:56:01 crc kubenswrapper[4805]: I0226 17:56:01.868590 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535476-mnb4s" event={"ID":"209d50d9-d2ec-4837-b292-a4b2a7f97825","Type":"ContainerStarted","Data":"97692cf4cd32f82865c1d6784695de3e5d98c15714a1ff72aa943c3f71e2b3da"} Feb 26 17:56:02 crc kubenswrapper[4805]: I0226 17:56:02.892308 4805 generic.go:334] "Generic (PLEG): container finished" podID="209d50d9-d2ec-4837-b292-a4b2a7f97825" containerID="c9f0e9748e4b6f3bc8c07e2b8cdf2230275414981dd5a568c3717f672d33d5aa" exitCode=0 Feb 26 17:56:02 crc kubenswrapper[4805]: I0226 17:56:02.892427 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29535476-mnb4s" event={"ID":"209d50d9-d2ec-4837-b292-a4b2a7f97825","Type":"ContainerDied","Data":"c9f0e9748e4b6f3bc8c07e2b8cdf2230275414981dd5a568c3717f672d33d5aa"} Feb 26 17:56:02 crc kubenswrapper[4805]: I0226 17:56:02.978916 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 17:56:02 crc kubenswrapper[4805]: I0226 17:56:02.979226 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.360201 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.463057 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-252qg\" (UniqueName: \"kubernetes.io/projected/209d50d9-d2ec-4837-b292-a4b2a7f97825-kube-api-access-252qg\") pod \"209d50d9-d2ec-4837-b292-a4b2a7f97825\" (UID: \"209d50d9-d2ec-4837-b292-a4b2a7f97825\") " Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.468579 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209d50d9-d2ec-4837-b292-a4b2a7f97825-kube-api-access-252qg" (OuterVolumeSpecName: "kube-api-access-252qg") pod "209d50d9-d2ec-4837-b292-a4b2a7f97825" (UID: "209d50d9-d2ec-4837-b292-a4b2a7f97825"). InnerVolumeSpecName "kube-api-access-252qg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.565679 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-252qg\" (UniqueName: \"kubernetes.io/projected/209d50d9-d2ec-4837-b292-a4b2a7f97825-kube-api-access-252qg\") on node \"crc\" DevicePath \"\"" Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.912220 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535476-mnb4s" event={"ID":"209d50d9-d2ec-4837-b292-a4b2a7f97825","Type":"ContainerDied","Data":"97692cf4cd32f82865c1d6784695de3e5d98c15714a1ff72aa943c3f71e2b3da"} Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.912551 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97692cf4cd32f82865c1d6784695de3e5d98c15714a1ff72aa943c3f71e2b3da" Feb 26 17:56:04 crc kubenswrapper[4805]: I0226 17:56:04.912285 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535476-mnb4s" Feb 26 17:56:05 crc kubenswrapper[4805]: I0226 17:56:05.435706 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535470-c26pd"] Feb 26 17:56:05 crc kubenswrapper[4805]: I0226 17:56:05.445336 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535470-c26pd"] Feb 26 17:56:06 crc kubenswrapper[4805]: I0226 17:56:06.973626 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676c2a01-20ee-4731-be9b-6b95816e3559" path="/var/lib/kubelet/pods/676c2a01-20ee-4731-be9b-6b95816e3559/volumes" Feb 26 17:56:32 crc kubenswrapper[4805]: I0226 17:56:32.978436 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 26 17:56:32 crc kubenswrapper[4805]: I0226 17:56:32.978983 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 17:56:32 crc kubenswrapper[4805]: I0226 17:56:32.979063 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 17:56:32 crc kubenswrapper[4805]: I0226 17:56:32.979928 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 17:56:32 crc kubenswrapper[4805]: I0226 17:56:32.979980 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" gracePeriod=600 Feb 26 17:56:33 crc kubenswrapper[4805]: E0226 17:56:33.102386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:56:33 crc kubenswrapper[4805]: 
I0226 17:56:33.210725 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" exitCode=0 Feb 26 17:56:33 crc kubenswrapper[4805]: I0226 17:56:33.210777 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172"} Feb 26 17:56:33 crc kubenswrapper[4805]: I0226 17:56:33.210826 4805 scope.go:117] "RemoveContainer" containerID="8b8968885710b097709d7bb1c27dbd325f06798e3afdff6ea081963d31b05a35" Feb 26 17:56:33 crc kubenswrapper[4805]: I0226 17:56:33.211562 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:56:33 crc kubenswrapper[4805]: E0226 17:56:33.211881 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:56:46 crc kubenswrapper[4805]: I0226 17:56:46.643855 4805 scope.go:117] "RemoveContainer" containerID="43d49a77ace660146e2be5f0e3a1a957bc25fad52783e84b10db29ccaff34241" Feb 26 17:56:47 crc kubenswrapper[4805]: I0226 17:56:47.954106 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:56:47 crc kubenswrapper[4805]: E0226 17:56:47.954794 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:56:58 crc kubenswrapper[4805]: I0226 17:56:58.953473 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:56:58 crc kubenswrapper[4805]: E0226 17:56:58.954162 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:57:12 crc kubenswrapper[4805]: I0226 17:57:12.954170 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:57:12 crc kubenswrapper[4805]: E0226 17:57:12.955180 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:57:14 crc kubenswrapper[4805]: I0226 17:57:14.939044 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9phx"] Feb 26 17:57:14 crc kubenswrapper[4805]: E0226 17:57:14.939842 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209d50d9-d2ec-4837-b292-a4b2a7f97825" containerName="oc" Feb 26 17:57:14 crc kubenswrapper[4805]: 
I0226 17:57:14.939858 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="209d50d9-d2ec-4837-b292-a4b2a7f97825" containerName="oc" Feb 26 17:57:14 crc kubenswrapper[4805]: I0226 17:57:14.940140 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="209d50d9-d2ec-4837-b292-a4b2a7f97825" containerName="oc" Feb 26 17:57:14 crc kubenswrapper[4805]: I0226 17:57:14.941956 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:14 crc kubenswrapper[4805]: I0226 17:57:14.990149 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9phx"] Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.082075 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-utilities\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.082200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-catalog-content\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.082497 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxl7\" (UniqueName: \"kubernetes.io/projected/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-kube-api-access-jbxl7\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 
17:57:15.184956 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbxl7\" (UniqueName: \"kubernetes.io/projected/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-kube-api-access-jbxl7\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.185168 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-utilities\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.185209 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-catalog-content\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.185677 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-catalog-content\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.185695 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-utilities\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.205820 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbxl7\" (UniqueName: \"kubernetes.io/projected/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-kube-api-access-jbxl7\") pod \"community-operators-w9phx\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.263384 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:15 crc kubenswrapper[4805]: I0226 17:57:15.862995 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9phx"] Feb 26 17:57:16 crc kubenswrapper[4805]: I0226 17:57:16.646554 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerID="97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab" exitCode=0 Feb 26 17:57:16 crc kubenswrapper[4805]: I0226 17:57:16.646638 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerDied","Data":"97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab"} Feb 26 17:57:16 crc kubenswrapper[4805]: I0226 17:57:16.646900 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerStarted","Data":"23d59ff54cf253566ac5ac8b9b2c7e7cc6d7b005bf2fac22b3146e9cdb1ee2c7"} Feb 26 17:57:17 crc kubenswrapper[4805]: I0226 17:57:17.661134 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerStarted","Data":"5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9"} Feb 26 17:57:19 crc kubenswrapper[4805]: I0226 17:57:19.682625 4805 
generic.go:334] "Generic (PLEG): container finished" podID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerID="5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9" exitCode=0 Feb 26 17:57:19 crc kubenswrapper[4805]: I0226 17:57:19.682966 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerDied","Data":"5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9"} Feb 26 17:57:20 crc kubenswrapper[4805]: I0226 17:57:20.694295 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerStarted","Data":"1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373"} Feb 26 17:57:20 crc kubenswrapper[4805]: I0226 17:57:20.724521 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9phx" podStartSLOduration=3.03452426 podStartE2EDuration="6.724498361s" podCreationTimestamp="2026-02-26 17:57:14 +0000 UTC" firstStartedPulling="2026-02-26 17:57:16.648284249 +0000 UTC m=+2551.210038588" lastFinishedPulling="2026-02-26 17:57:20.33825835 +0000 UTC m=+2554.900012689" observedRunningTime="2026-02-26 17:57:20.713794927 +0000 UTC m=+2555.275549266" watchObservedRunningTime="2026-02-26 17:57:20.724498361 +0000 UTC m=+2555.286252700" Feb 26 17:57:24 crc kubenswrapper[4805]: I0226 17:57:24.953672 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:57:24 crc kubenswrapper[4805]: E0226 17:57:24.954449 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:57:25 crc kubenswrapper[4805]: I0226 17:57:25.264960 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:25 crc kubenswrapper[4805]: I0226 17:57:25.265034 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:25 crc kubenswrapper[4805]: I0226 17:57:25.317598 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:25 crc kubenswrapper[4805]: I0226 17:57:25.803320 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:26 crc kubenswrapper[4805]: I0226 17:57:26.930588 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9phx"] Feb 26 17:57:27 crc kubenswrapper[4805]: I0226 17:57:27.757831 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w9phx" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="registry-server" containerID="cri-o://1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373" gracePeriod=2 Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.270839 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.393032 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbxl7\" (UniqueName: \"kubernetes.io/projected/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-kube-api-access-jbxl7\") pod \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.393091 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-catalog-content\") pod \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.393285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-utilities\") pod \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\" (UID: \"9d6d5981-f340-4cb4-a1de-2de9e6f5797b\") " Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.394092 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-utilities" (OuterVolumeSpecName: "utilities") pod "9d6d5981-f340-4cb4-a1de-2de9e6f5797b" (UID: "9d6d5981-f340-4cb4-a1de-2de9e6f5797b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.399272 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-kube-api-access-jbxl7" (OuterVolumeSpecName: "kube-api-access-jbxl7") pod "9d6d5981-f340-4cb4-a1de-2de9e6f5797b" (UID: "9d6d5981-f340-4cb4-a1de-2de9e6f5797b"). InnerVolumeSpecName "kube-api-access-jbxl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.444648 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d6d5981-f340-4cb4-a1de-2de9e6f5797b" (UID: "9d6d5981-f340-4cb4-a1de-2de9e6f5797b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.495481 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbxl7\" (UniqueName: \"kubernetes.io/projected/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-kube-api-access-jbxl7\") on node \"crc\" DevicePath \"\"" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.495554 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.495567 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d6d5981-f340-4cb4-a1de-2de9e6f5797b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.788188 4805 generic.go:334] "Generic (PLEG): container finished" podID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerID="1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373" exitCode=0 Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.788258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerDied","Data":"1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373"} Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.788313 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-w9phx" event={"ID":"9d6d5981-f340-4cb4-a1de-2de9e6f5797b","Type":"ContainerDied","Data":"23d59ff54cf253566ac5ac8b9b2c7e7cc6d7b005bf2fac22b3146e9cdb1ee2c7"} Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.788318 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9phx" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.788348 4805 scope.go:117] "RemoveContainer" containerID="1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.814462 4805 scope.go:117] "RemoveContainer" containerID="5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.834321 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9phx"] Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.843211 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w9phx"] Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.846790 4805 scope.go:117] "RemoveContainer" containerID="97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.894369 4805 scope.go:117] "RemoveContainer" containerID="1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373" Feb 26 17:57:28 crc kubenswrapper[4805]: E0226 17:57:28.895199 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373\": container with ID starting with 1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373 not found: ID does not exist" containerID="1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 
17:57:28.895253 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373"} err="failed to get container status \"1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373\": rpc error: code = NotFound desc = could not find container \"1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373\": container with ID starting with 1d387f83dd523bf97998e5c29748d0133f54de68ffe6b85def5d9136c37a4373 not found: ID does not exist" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.895275 4805 scope.go:117] "RemoveContainer" containerID="5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9" Feb 26 17:57:28 crc kubenswrapper[4805]: E0226 17:57:28.895593 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9\": container with ID starting with 5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9 not found: ID does not exist" containerID="5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.895646 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9"} err="failed to get container status \"5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9\": rpc error: code = NotFound desc = could not find container \"5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9\": container with ID starting with 5ec801e9da462d402498f2a0046f4cb2f1d94dc8d60f1fa1887656d62bf872e9 not found: ID does not exist" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.895676 4805 scope.go:117] "RemoveContainer" containerID="97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab" Feb 26 17:57:28 crc 
kubenswrapper[4805]: E0226 17:57:28.896084 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab\": container with ID starting with 97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab not found: ID does not exist" containerID="97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.896127 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab"} err="failed to get container status \"97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab\": rpc error: code = NotFound desc = could not find container \"97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab\": container with ID starting with 97bda32e290ad96fafe1dd7d06a577a51ff2d992335dca4b575c8c00b04424ab not found: ID does not exist" Feb 26 17:57:28 crc kubenswrapper[4805]: I0226 17:57:28.965667 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" path="/var/lib/kubelet/pods/9d6d5981-f340-4cb4-a1de-2de9e6f5797b/volumes" Feb 26 17:57:39 crc kubenswrapper[4805]: I0226 17:57:39.953344 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:57:39 crc kubenswrapper[4805]: E0226 17:57:39.954295 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:57:54 crc 
kubenswrapper[4805]: I0226 17:57:54.957937 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:57:54 crc kubenswrapper[4805]: E0226 17:57:54.958807 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.160390 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535478-j6qkw"] Feb 26 17:58:00 crc kubenswrapper[4805]: E0226 17:58:00.161352 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="registry-server" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.161370 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="registry-server" Feb 26 17:58:00 crc kubenswrapper[4805]: E0226 17:58:00.161386 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="extract-utilities" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.161396 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="extract-utilities" Feb 26 17:58:00 crc kubenswrapper[4805]: E0226 17:58:00.161424 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="extract-content" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.161432 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" 
containerName="extract-content" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.161707 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6d5981-f340-4cb4-a1de-2de9e6f5797b" containerName="registry-server" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.162732 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.166037 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.166418 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.166502 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.179886 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535478-j6qkw"] Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.298167 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fsb\" (UniqueName: \"kubernetes.io/projected/da01a5e5-b523-47bc-a444-dbd5cb857330-kube-api-access-b4fsb\") pod \"auto-csr-approver-29535478-j6qkw\" (UID: \"da01a5e5-b523-47bc-a444-dbd5cb857330\") " pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.400129 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fsb\" (UniqueName: \"kubernetes.io/projected/da01a5e5-b523-47bc-a444-dbd5cb857330-kube-api-access-b4fsb\") pod \"auto-csr-approver-29535478-j6qkw\" (UID: \"da01a5e5-b523-47bc-a444-dbd5cb857330\") " pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 
17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.423129 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fsb\" (UniqueName: \"kubernetes.io/projected/da01a5e5-b523-47bc-a444-dbd5cb857330-kube-api-access-b4fsb\") pod \"auto-csr-approver-29535478-j6qkw\" (UID: \"da01a5e5-b523-47bc-a444-dbd5cb857330\") " pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 17:58:00 crc kubenswrapper[4805]: I0226 17:58:00.502615 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 17:58:01 crc kubenswrapper[4805]: I0226 17:58:01.018673 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535478-j6qkw"] Feb 26 17:58:01 crc kubenswrapper[4805]: I0226 17:58:01.115799 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" event={"ID":"da01a5e5-b523-47bc-a444-dbd5cb857330","Type":"ContainerStarted","Data":"ccb9d80becfc27ca20530cb2439d54efb112967311e0a8b70c074077b7948739"} Feb 26 17:58:03 crc kubenswrapper[4805]: I0226 17:58:03.135251 4805 generic.go:334] "Generic (PLEG): container finished" podID="da01a5e5-b523-47bc-a444-dbd5cb857330" containerID="088f703254843fc8336d7e6aa2addc1d3ee78ed8239ca3a1aebe16f9b30b3441" exitCode=0 Feb 26 17:58:03 crc kubenswrapper[4805]: I0226 17:58:03.135297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" event={"ID":"da01a5e5-b523-47bc-a444-dbd5cb857330","Type":"ContainerDied","Data":"088f703254843fc8336d7e6aa2addc1d3ee78ed8239ca3a1aebe16f9b30b3441"} Feb 26 17:58:04 crc kubenswrapper[4805]: I0226 17:58:04.582910 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 17:58:04 crc kubenswrapper[4805]: I0226 17:58:04.694702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4fsb\" (UniqueName: \"kubernetes.io/projected/da01a5e5-b523-47bc-a444-dbd5cb857330-kube-api-access-b4fsb\") pod \"da01a5e5-b523-47bc-a444-dbd5cb857330\" (UID: \"da01a5e5-b523-47bc-a444-dbd5cb857330\") " Feb 26 17:58:04 crc kubenswrapper[4805]: I0226 17:58:04.710213 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da01a5e5-b523-47bc-a444-dbd5cb857330-kube-api-access-b4fsb" (OuterVolumeSpecName: "kube-api-access-b4fsb") pod "da01a5e5-b523-47bc-a444-dbd5cb857330" (UID: "da01a5e5-b523-47bc-a444-dbd5cb857330"). InnerVolumeSpecName "kube-api-access-b4fsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:58:04 crc kubenswrapper[4805]: I0226 17:58:04.797605 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4fsb\" (UniqueName: \"kubernetes.io/projected/da01a5e5-b523-47bc-a444-dbd5cb857330-kube-api-access-b4fsb\") on node \"crc\" DevicePath \"\"" Feb 26 17:58:05 crc kubenswrapper[4805]: I0226 17:58:05.155684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" event={"ID":"da01a5e5-b523-47bc-a444-dbd5cb857330","Type":"ContainerDied","Data":"ccb9d80becfc27ca20530cb2439d54efb112967311e0a8b70c074077b7948739"} Feb 26 17:58:05 crc kubenswrapper[4805]: I0226 17:58:05.155735 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535478-j6qkw" Feb 26 17:58:05 crc kubenswrapper[4805]: I0226 17:58:05.155735 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb9d80becfc27ca20530cb2439d54efb112967311e0a8b70c074077b7948739" Feb 26 17:58:05 crc kubenswrapper[4805]: I0226 17:58:05.662645 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535472-l7dqp"] Feb 26 17:58:05 crc kubenswrapper[4805]: I0226 17:58:05.672876 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535472-l7dqp"] Feb 26 17:58:06 crc kubenswrapper[4805]: I0226 17:58:06.965232 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5b2036-695b-44a8-86a2-0c2b7968ace0" path="/var/lib/kubelet/pods/eb5b2036-695b-44a8-86a2-0c2b7968ace0/volumes" Feb 26 17:58:08 crc kubenswrapper[4805]: I0226 17:58:08.953272 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:58:08 crc kubenswrapper[4805]: E0226 17:58:08.954618 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:58:22 crc kubenswrapper[4805]: I0226 17:58:22.953606 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:58:22 crc kubenswrapper[4805]: E0226 17:58:22.954558 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:58:25 crc kubenswrapper[4805]: I0226 17:58:25.361555 4805 generic.go:334] "Generic (PLEG): container finished" podID="9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" containerID="da7d59e0ae14f17d155caffd64ab6aa3fb09cd9565d7cca40261ad8331c42d81" exitCode=0 Feb 26 17:58:25 crc kubenswrapper[4805]: I0226 17:58:25.361906 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" event={"ID":"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753","Type":"ContainerDied","Data":"da7d59e0ae14f17d155caffd64ab6aa3fb09cd9565d7cca40261ad8331c42d81"} Feb 26 17:58:26 crc kubenswrapper[4805]: I0226 17:58:26.917870 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.040130 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-ssh-key-openstack-edpm-ipam\") pod \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.040857 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-inventory\") pod \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.041071 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: 
\"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-secret-0\") pod \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.041290 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fj5j\" (UniqueName: \"kubernetes.io/projected/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-kube-api-access-7fj5j\") pod \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.041472 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-combined-ca-bundle\") pod \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\" (UID: \"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753\") " Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.046500 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" (UID: "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.046559 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-kube-api-access-7fj5j" (OuterVolumeSpecName: "kube-api-access-7fj5j") pod "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" (UID: "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753"). InnerVolumeSpecName "kube-api-access-7fj5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.069841 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" (UID: "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.075381 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-inventory" (OuterVolumeSpecName: "inventory") pod "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" (UID: "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.075724 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" (UID: "9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.144944 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.144977 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.144989 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fj5j\" (UniqueName: \"kubernetes.io/projected/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-kube-api-access-7fj5j\") on node \"crc\" DevicePath \"\"" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.144998 4805 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.145008 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.381042 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" event={"ID":"9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753","Type":"ContainerDied","Data":"7e1621a93d953cc66a98a45d0f7e5cb4563b213edd5e8e8085bbafc44336e261"} Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.381081 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1621a93d953cc66a98a45d0f7e5cb4563b213edd5e8e8085bbafc44336e261" Feb 26 
17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.381130 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.484728 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr"] Feb 26 17:58:27 crc kubenswrapper[4805]: E0226 17:58:27.485288 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.485311 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 17:58:27 crc kubenswrapper[4805]: E0226 17:58:27.485342 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da01a5e5-b523-47bc-a444-dbd5cb857330" containerName="oc" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.485352 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="da01a5e5-b523-47bc-a444-dbd5cb857330" containerName="oc" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.485634 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="da01a5e5-b523-47bc-a444-dbd5cb857330" containerName="oc" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.485654 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.486547 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.490798 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.491009 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.519064 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.519916 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.520056 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.520152 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.520899 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.537979 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr"] Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.655829 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: 
I0226 17:58:27.655879 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.655907 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.655968 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656043 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/16ac99ed-1590-491d-938b-7a795e72c605-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656088 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" 
(UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656125 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656257 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656303 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656343 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: 
\"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.656380 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt7wv\" (UniqueName: \"kubernetes.io/projected/16ac99ed-1590-491d-938b-7a795e72c605-kube-api-access-lt7wv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.757804 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.757869 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.757908 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt7wv\" (UniqueName: \"kubernetes.io/projected/16ac99ed-1590-491d-938b-7a795e72c605-kube-api-access-lt7wv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.757977 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.758008 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.758163 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.758195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.758268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/16ac99ed-1590-491d-938b-7a795e72c605-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: 
\"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.758319 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.758562 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.761036 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/16ac99ed-1590-491d-938b-7a795e72c605-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.761419 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.761643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.761839 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.765893 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.765901 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.766041 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.766556 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.767087 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.773213 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.776079 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt7wv\" (UniqueName: \"kubernetes.io/projected/16ac99ed-1590-491d-938b-7a795e72c605-kube-api-access-lt7wv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.780441 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-btpdr\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:27 crc kubenswrapper[4805]: I0226 17:58:27.815933 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 17:58:28 crc kubenswrapper[4805]: I0226 17:58:28.400073 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr"] Feb 26 17:58:29 crc kubenswrapper[4805]: I0226 17:58:29.408783 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" event={"ID":"16ac99ed-1590-491d-938b-7a795e72c605","Type":"ContainerStarted","Data":"13ff2ea8178f3bc86428c66411a90b8e42b7da132cd509c52883de91d595d3f3"} Feb 26 17:58:29 crc kubenswrapper[4805]: I0226 17:58:29.409738 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" event={"ID":"16ac99ed-1590-491d-938b-7a795e72c605","Type":"ContainerStarted","Data":"8d495e750763923b9ae2f15f210a138829c91b9d45b610e622686f4a643e26d0"} Feb 26 17:58:29 crc kubenswrapper[4805]: I0226 17:58:29.439593 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" podStartSLOduration=2.05271308 podStartE2EDuration="2.439570366s" podCreationTimestamp="2026-02-26 17:58:27 +0000 UTC" firstStartedPulling="2026-02-26 17:58:28.403177947 +0000 UTC m=+2622.964932286" lastFinishedPulling="2026-02-26 17:58:28.790035243 +0000 UTC m=+2623.351789572" observedRunningTime="2026-02-26 17:58:29.430264468 +0000 UTC m=+2623.992018807" watchObservedRunningTime="2026-02-26 17:58:29.439570366 +0000 UTC m=+2624.001324705" Feb 26 17:58:35 crc kubenswrapper[4805]: I0226 17:58:35.953997 
4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:58:35 crc kubenswrapper[4805]: E0226 17:58:35.954998 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:58:46 crc kubenswrapper[4805]: I0226 17:58:46.786577 4805 scope.go:117] "RemoveContainer" containerID="272b2ec48f3cea8b1720e47197d75e8d53d12907b35305e26096a1e790ef17ae" Feb 26 17:58:46 crc kubenswrapper[4805]: I0226 17:58:46.959525 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:58:46 crc kubenswrapper[4805]: E0226 17:58:46.960008 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:59:00 crc kubenswrapper[4805]: I0226 17:59:00.958699 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:59:00 crc kubenswrapper[4805]: E0226 17:59:00.959555 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:59:12 crc kubenswrapper[4805]: I0226 17:59:12.953726 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:59:12 crc kubenswrapper[4805]: E0226 17:59:12.954574 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:59:27 crc kubenswrapper[4805]: I0226 17:59:27.953875 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:59:27 crc kubenswrapper[4805]: E0226 17:59:27.954615 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:59:42 crc kubenswrapper[4805]: I0226 17:59:42.953811 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:59:42 crc kubenswrapper[4805]: E0226 17:59:42.954874 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 17:59:55 crc kubenswrapper[4805]: I0226 17:59:55.953509 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 17:59:55 crc kubenswrapper[4805]: E0226 17:59:55.954417 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.151469 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535480-4fqk4"] Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.154486 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.157353 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.157605 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.162270 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.168491 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c"] Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.170495 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.174464 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.174509 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.180419 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535480-4fqk4"] Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.211463 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c"] Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.294330 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d43471c4-be75-4b63-9191-a52fc498d6f5-secret-volume\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.294413 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvnp\" (UniqueName: \"kubernetes.io/projected/c164d473-a7f5-4a34-bd10-83e762547f46-kube-api-access-kzvnp\") pod \"auto-csr-approver-29535480-4fqk4\" (UID: \"c164d473-a7f5-4a34-bd10-83e762547f46\") " pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.294616 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4xwr\" (UniqueName: \"kubernetes.io/projected/d43471c4-be75-4b63-9191-a52fc498d6f5-kube-api-access-h4xwr\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.294851 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43471c4-be75-4b63-9191-a52fc498d6f5-config-volume\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.396332 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43471c4-be75-4b63-9191-a52fc498d6f5-secret-volume\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 
18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.396415 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvnp\" (UniqueName: \"kubernetes.io/projected/c164d473-a7f5-4a34-bd10-83e762547f46-kube-api-access-kzvnp\") pod \"auto-csr-approver-29535480-4fqk4\" (UID: \"c164d473-a7f5-4a34-bd10-83e762547f46\") " pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.396489 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4xwr\" (UniqueName: \"kubernetes.io/projected/d43471c4-be75-4b63-9191-a52fc498d6f5-kube-api-access-h4xwr\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.396613 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43471c4-be75-4b63-9191-a52fc498d6f5-config-volume\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.397729 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43471c4-be75-4b63-9191-a52fc498d6f5-config-volume\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.402855 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43471c4-be75-4b63-9191-a52fc498d6f5-secret-volume\") pod \"collect-profiles-29535480-wsv6c\" (UID: 
\"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.416117 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4xwr\" (UniqueName: \"kubernetes.io/projected/d43471c4-be75-4b63-9191-a52fc498d6f5-kube-api-access-h4xwr\") pod \"collect-profiles-29535480-wsv6c\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.421307 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzvnp\" (UniqueName: \"kubernetes.io/projected/c164d473-a7f5-4a34-bd10-83e762547f46-kube-api-access-kzvnp\") pod \"auto-csr-approver-29535480-4fqk4\" (UID: \"c164d473-a7f5-4a34-bd10-83e762547f46\") " pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.485372 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.501196 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.974124 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535480-4fqk4"] Feb 26 18:00:00 crc kubenswrapper[4805]: I0226 18:00:00.981482 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 18:00:01 crc kubenswrapper[4805]: W0226 18:00:01.045884 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd43471c4_be75_4b63_9191_a52fc498d6f5.slice/crio-f093d10778ffb7facfd7b58cf698409171ef586e6b9741463e7e027737fb6ced WatchSource:0}: Error finding container f093d10778ffb7facfd7b58cf698409171ef586e6b9741463e7e027737fb6ced: Status 404 returned error can't find the container with id f093d10778ffb7facfd7b58cf698409171ef586e6b9741463e7e027737fb6ced Feb 26 18:00:01 crc kubenswrapper[4805]: I0226 18:00:01.047466 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c"] Feb 26 18:00:01 crc kubenswrapper[4805]: I0226 18:00:01.353031 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" event={"ID":"c164d473-a7f5-4a34-bd10-83e762547f46","Type":"ContainerStarted","Data":"6301f4c301fd3bc4680717ad24ae1796799ee3272777ad88a68a4d75d891b4d4"} Feb 26 18:00:01 crc kubenswrapper[4805]: I0226 18:00:01.354493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" event={"ID":"d43471c4-be75-4b63-9191-a52fc498d6f5","Type":"ContainerStarted","Data":"75e4c75efefc060ffbb51ce5c755e14f9201e32c29f123edfd88014b9643c0cf"} Feb 26 18:00:01 crc kubenswrapper[4805]: I0226 18:00:01.354521 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" event={"ID":"d43471c4-be75-4b63-9191-a52fc498d6f5","Type":"ContainerStarted","Data":"f093d10778ffb7facfd7b58cf698409171ef586e6b9741463e7e027737fb6ced"} Feb 26 18:00:01 crc kubenswrapper[4805]: I0226 18:00:01.374927 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" podStartSLOduration=1.374906177 podStartE2EDuration="1.374906177s" podCreationTimestamp="2026-02-26 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 18:00:01.371782577 +0000 UTC m=+2715.933536926" watchObservedRunningTime="2026-02-26 18:00:01.374906177 +0000 UTC m=+2715.936660516" Feb 26 18:00:02 crc kubenswrapper[4805]: I0226 18:00:02.372104 4805 generic.go:334] "Generic (PLEG): container finished" podID="d43471c4-be75-4b63-9191-a52fc498d6f5" containerID="75e4c75efefc060ffbb51ce5c755e14f9201e32c29f123edfd88014b9643c0cf" exitCode=0 Feb 26 18:00:02 crc kubenswrapper[4805]: I0226 18:00:02.372232 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" event={"ID":"d43471c4-be75-4b63-9191-a52fc498d6f5","Type":"ContainerDied","Data":"75e4c75efefc060ffbb51ce5c755e14f9201e32c29f123edfd88014b9643c0cf"} Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.793503 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.982809 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43471c4-be75-4b63-9191-a52fc498d6f5-secret-volume\") pod \"d43471c4-be75-4b63-9191-a52fc498d6f5\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.982874 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43471c4-be75-4b63-9191-a52fc498d6f5-config-volume\") pod \"d43471c4-be75-4b63-9191-a52fc498d6f5\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.982921 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4xwr\" (UniqueName: \"kubernetes.io/projected/d43471c4-be75-4b63-9191-a52fc498d6f5-kube-api-access-h4xwr\") pod \"d43471c4-be75-4b63-9191-a52fc498d6f5\" (UID: \"d43471c4-be75-4b63-9191-a52fc498d6f5\") " Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.984045 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d43471c4-be75-4b63-9191-a52fc498d6f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "d43471c4-be75-4b63-9191-a52fc498d6f5" (UID: "d43471c4-be75-4b63-9191-a52fc498d6f5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.989723 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d43471c4-be75-4b63-9191-a52fc498d6f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d43471c4-be75-4b63-9191-a52fc498d6f5" (UID: "d43471c4-be75-4b63-9191-a52fc498d6f5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:03 crc kubenswrapper[4805]: I0226 18:00:03.993708 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43471c4-be75-4b63-9191-a52fc498d6f5-kube-api-access-h4xwr" (OuterVolumeSpecName: "kube-api-access-h4xwr") pod "d43471c4-be75-4b63-9191-a52fc498d6f5" (UID: "d43471c4-be75-4b63-9191-a52fc498d6f5"). InnerVolumeSpecName "kube-api-access-h4xwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.085508 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d43471c4-be75-4b63-9191-a52fc498d6f5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.085548 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d43471c4-be75-4b63-9191-a52fc498d6f5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.085559 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4xwr\" (UniqueName: \"kubernetes.io/projected/d43471c4-be75-4b63-9191-a52fc498d6f5-kube-api-access-h4xwr\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.392724 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" event={"ID":"d43471c4-be75-4b63-9191-a52fc498d6f5","Type":"ContainerDied","Data":"f093d10778ffb7facfd7b58cf698409171ef586e6b9741463e7e027737fb6ced"} Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.393042 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f093d10778ffb7facfd7b58cf698409171ef586e6b9741463e7e027737fb6ced" Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.392850 4805 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535480-wsv6c" Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.450484 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"] Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.459855 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535435-n8728"] Feb 26 18:00:04 crc kubenswrapper[4805]: I0226 18:00:04.970944 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3f2460-fdb8-4b47-89f9-3cbb84e143e8" path="/var/lib/kubelet/pods/0f3f2460-fdb8-4b47-89f9-3cbb84e143e8/volumes" Feb 26 18:00:05 crc kubenswrapper[4805]: I0226 18:00:05.404606 4805 generic.go:334] "Generic (PLEG): container finished" podID="c164d473-a7f5-4a34-bd10-83e762547f46" containerID="09b0abbfc876806cdc905c0e45224fffe27d589c2db03fffce73c5b0507f7209" exitCode=0 Feb 26 18:00:05 crc kubenswrapper[4805]: I0226 18:00:05.404695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" event={"ID":"c164d473-a7f5-4a34-bd10-83e762547f46","Type":"ContainerDied","Data":"09b0abbfc876806cdc905c0e45224fffe27d589c2db03fffce73c5b0507f7209"} Feb 26 18:00:06 crc kubenswrapper[4805]: I0226 18:00:06.847939 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:06 crc kubenswrapper[4805]: I0226 18:00:06.951088 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzvnp\" (UniqueName: \"kubernetes.io/projected/c164d473-a7f5-4a34-bd10-83e762547f46-kube-api-access-kzvnp\") pod \"c164d473-a7f5-4a34-bd10-83e762547f46\" (UID: \"c164d473-a7f5-4a34-bd10-83e762547f46\") " Feb 26 18:00:06 crc kubenswrapper[4805]: I0226 18:00:06.955539 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c164d473-a7f5-4a34-bd10-83e762547f46-kube-api-access-kzvnp" (OuterVolumeSpecName: "kube-api-access-kzvnp") pod "c164d473-a7f5-4a34-bd10-83e762547f46" (UID: "c164d473-a7f5-4a34-bd10-83e762547f46"). InnerVolumeSpecName "kube-api-access-kzvnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.055047 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzvnp\" (UniqueName: \"kubernetes.io/projected/c164d473-a7f5-4a34-bd10-83e762547f46-kube-api-access-kzvnp\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.431854 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" event={"ID":"c164d473-a7f5-4a34-bd10-83e762547f46","Type":"ContainerDied","Data":"6301f4c301fd3bc4680717ad24ae1796799ee3272777ad88a68a4d75d891b4d4"} Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.432103 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6301f4c301fd3bc4680717ad24ae1796799ee3272777ad88a68a4d75d891b4d4" Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.432141 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535480-4fqk4" Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.910572 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535474-dmvw5"] Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.922278 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535474-dmvw5"] Feb 26 18:00:07 crc kubenswrapper[4805]: I0226 18:00:07.953503 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 18:00:07 crc kubenswrapper[4805]: E0226 18:00:07.953773 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:00:08 crc kubenswrapper[4805]: I0226 18:00:08.971451 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7" path="/var/lib/kubelet/pods/7d6e84d7-d4c1-4dd1-af08-98fc71e26ef7/volumes" Feb 26 18:00:20 crc kubenswrapper[4805]: I0226 18:00:20.954514 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 18:00:20 crc kubenswrapper[4805]: E0226 18:00:20.955442 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:00:32 crc kubenswrapper[4805]: I0226 18:00:32.953744 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 18:00:32 crc kubenswrapper[4805]: E0226 18:00:32.955236 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:00:34 crc kubenswrapper[4805]: I0226 18:00:34.680078 4805 generic.go:334] "Generic (PLEG): container finished" podID="16ac99ed-1590-491d-938b-7a795e72c605" containerID="13ff2ea8178f3bc86428c66411a90b8e42b7da132cd509c52883de91d595d3f3" exitCode=0 Feb 26 18:00:34 crc kubenswrapper[4805]: I0226 18:00:34.680157 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" event={"ID":"16ac99ed-1590-491d-938b-7a795e72c605","Type":"ContainerDied","Data":"13ff2ea8178f3bc86428c66411a90b8e42b7da132cd509c52883de91d595d3f3"} Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.255029 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382490 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-1\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382586 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-ssh-key-openstack-edpm-ipam\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382616 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-inventory\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382687 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-1\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382707 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt7wv\" (UniqueName: \"kubernetes.io/projected/16ac99ed-1590-491d-938b-7a795e72c605-kube-api-access-lt7wv\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382756 4805 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-0\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382804 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/16ac99ed-1590-491d-938b-7a795e72c605-nova-extra-config-0\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382905 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-2\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382922 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-0\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382940 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-3\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.382956 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-combined-ca-bundle\") pod \"16ac99ed-1590-491d-938b-7a795e72c605\" (UID: \"16ac99ed-1590-491d-938b-7a795e72c605\") " Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.390222 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.391269 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ac99ed-1590-491d-938b-7a795e72c605-kube-api-access-lt7wv" (OuterVolumeSpecName: "kube-api-access-lt7wv") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "kube-api-access-lt7wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.418727 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.419042 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-inventory" (OuterVolumeSpecName: "inventory") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.419613 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.419690 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.421536 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.422568 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.426412 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.428442 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.429700 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ac99ed-1590-491d-938b-7a795e72c605-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "16ac99ed-1590-491d-938b-7a795e72c605" (UID: "16ac99ed-1590-491d-938b-7a795e72c605"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485273 4805 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485315 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt7wv\" (UniqueName: \"kubernetes.io/projected/16ac99ed-1590-491d-938b-7a795e72c605-kube-api-access-lt7wv\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485325 4805 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485334 4805 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/16ac99ed-1590-491d-938b-7a795e72c605-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485344 4805 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485354 4805 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485363 4805 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: 
\"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485373 4805 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485383 4805 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485392 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.485401 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16ac99ed-1590-491d-938b-7a795e72c605-inventory\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.704261 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" event={"ID":"16ac99ed-1590-491d-938b-7a795e72c605","Type":"ContainerDied","Data":"8d495e750763923b9ae2f15f210a138829c91b9d45b610e622686f4a643e26d0"} Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.704306 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d495e750763923b9ae2f15f210a138829c91b9d45b610e622686f4a643e26d0" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.704359 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-btpdr" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.926695 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74"] Feb 26 18:00:36 crc kubenswrapper[4805]: E0226 18:00:36.927274 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164d473-a7f5-4a34-bd10-83e762547f46" containerName="oc" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.927293 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164d473-a7f5-4a34-bd10-83e762547f46" containerName="oc" Feb 26 18:00:36 crc kubenswrapper[4805]: E0226 18:00:36.927313 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ac99ed-1590-491d-938b-7a795e72c605" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.927323 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ac99ed-1590-491d-938b-7a795e72c605" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 18:00:36 crc kubenswrapper[4805]: E0226 18:00:36.927337 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43471c4-be75-4b63-9191-a52fc498d6f5" containerName="collect-profiles" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.927344 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43471c4-be75-4b63-9191-a52fc498d6f5" containerName="collect-profiles" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.927575 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ac99ed-1590-491d-938b-7a795e72c605" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.927600 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d43471c4-be75-4b63-9191-a52fc498d6f5" containerName="collect-profiles" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.927625 4805 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c164d473-a7f5-4a34-bd10-83e762547f46" containerName="oc" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.929541 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.933061 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.933367 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.933538 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.933731 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-sc2xs" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.933913 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.951706 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74"] Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993691 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993743 4805 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993773 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993838 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5w77\" (UniqueName: \"kubernetes.io/projected/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-kube-api-access-v5w77\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993889 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993938 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:36 crc kubenswrapper[4805]: I0226 18:00:36.993991 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098461 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098541 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098690 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: 
\"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098715 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098741 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098777 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5w77\" (UniqueName: \"kubernetes.io/projected/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-kube-api-access-v5w77\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.098811 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 
18:00:37.102902 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.102942 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.102970 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.103205 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.104783 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.105007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.116071 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5w77\" (UniqueName: \"kubernetes.io/projected/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-kube-api-access-v5w77\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xnt74\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.303472 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" Feb 26 18:00:37 crc kubenswrapper[4805]: I0226 18:00:37.866051 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74"] Feb 26 18:00:38 crc kubenswrapper[4805]: I0226 18:00:38.725452 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" event={"ID":"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0","Type":"ContainerStarted","Data":"19dfeb024d4fc8a2e5e424bb406f56b8add563d7d7fe2f48f397d1562be7812c"} Feb 26 18:00:38 crc kubenswrapper[4805]: I0226 18:00:38.801384 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m2d5"] Feb 26 18:00:38 crc kubenswrapper[4805]: I0226 18:00:38.804036 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:38 crc kubenswrapper[4805]: I0226 18:00:38.844314 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m2d5"] Feb 26 18:00:38 crc kubenswrapper[4805]: I0226 18:00:38.952011 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dncs\" (UniqueName: \"kubernetes.io/projected/1714aa07-983b-4521-814f-fb44b0ea75e5-kube-api-access-6dncs\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:38 crc kubenswrapper[4805]: I0226 18:00:38.952162 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-utilities\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:38 crc 
kubenswrapper[4805]: I0226 18:00:38.952205 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-catalog-content\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.053878 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-utilities\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.053963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-catalog-content\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.054195 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dncs\" (UniqueName: \"kubernetes.io/projected/1714aa07-983b-4521-814f-fb44b0ea75e5-kube-api-access-6dncs\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.055766 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-utilities\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: 
I0226 18:00:39.057286 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-catalog-content\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.073595 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dncs\" (UniqueName: \"kubernetes.io/projected/1714aa07-983b-4521-814f-fb44b0ea75e5-kube-api-access-6dncs\") pod \"redhat-marketplace-7m2d5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.122204 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:39 crc kubenswrapper[4805]: W0226 18:00:39.634319 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1714aa07_983b_4521_814f_fb44b0ea75e5.slice/crio-064b48cdd2637579157a20bbaf7ac433f9481c42576d0e08a9faa266536646c1 WatchSource:0}: Error finding container 064b48cdd2637579157a20bbaf7ac433f9481c42576d0e08a9faa266536646c1: Status 404 returned error can't find the container with id 064b48cdd2637579157a20bbaf7ac433f9481c42576d0e08a9faa266536646c1 Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.635852 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m2d5"] Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.736680 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerStarted","Data":"064b48cdd2637579157a20bbaf7ac433f9481c42576d0e08a9faa266536646c1"} Feb 26 18:00:39 crc 
kubenswrapper[4805]: I0226 18:00:39.738436 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" event={"ID":"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0","Type":"ContainerStarted","Data":"45b00c22861226a53dd35eaf2fe60f5b9c6fa50a2895ddebcc04ab3999e9ef8d"} Feb 26 18:00:39 crc kubenswrapper[4805]: I0226 18:00:39.760090 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" podStartSLOduration=3.114446397 podStartE2EDuration="3.76007294s" podCreationTimestamp="2026-02-26 18:00:36 +0000 UTC" firstStartedPulling="2026-02-26 18:00:37.877014617 +0000 UTC m=+2752.438769006" lastFinishedPulling="2026-02-26 18:00:38.5226412 +0000 UTC m=+2753.084395549" observedRunningTime="2026-02-26 18:00:39.758565842 +0000 UTC m=+2754.320320191" watchObservedRunningTime="2026-02-26 18:00:39.76007294 +0000 UTC m=+2754.321827279" Feb 26 18:00:40 crc kubenswrapper[4805]: I0226 18:00:40.748609 4805 generic.go:334] "Generic (PLEG): container finished" podID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerID="62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce" exitCode=0 Feb 26 18:00:40 crc kubenswrapper[4805]: I0226 18:00:40.748704 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerDied","Data":"62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce"} Feb 26 18:00:41 crc kubenswrapper[4805]: I0226 18:00:41.759883 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerStarted","Data":"a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c"} Feb 26 18:00:42 crc kubenswrapper[4805]: I0226 18:00:42.773381 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerDied","Data":"a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c"} Feb 26 18:00:42 crc kubenswrapper[4805]: I0226 18:00:42.775029 4805 generic.go:334] "Generic (PLEG): container finished" podID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerID="a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c" exitCode=0 Feb 26 18:00:43 crc kubenswrapper[4805]: I0226 18:00:43.785890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerStarted","Data":"d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1"} Feb 26 18:00:43 crc kubenswrapper[4805]: I0226 18:00:43.803775 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m2d5" podStartSLOduration=3.395918916 podStartE2EDuration="5.803757584s" podCreationTimestamp="2026-02-26 18:00:38 +0000 UTC" firstStartedPulling="2026-02-26 18:00:40.751890853 +0000 UTC m=+2755.313645192" lastFinishedPulling="2026-02-26 18:00:43.159729521 +0000 UTC m=+2757.721483860" observedRunningTime="2026-02-26 18:00:43.800890241 +0000 UTC m=+2758.362644580" watchObservedRunningTime="2026-02-26 18:00:43.803757584 +0000 UTC m=+2758.365511923" Feb 26 18:00:45 crc kubenswrapper[4805]: I0226 18:00:45.953589 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 18:00:45 crc kubenswrapper[4805]: E0226 18:00:45.954220 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:00:46 crc kubenswrapper[4805]: I0226 18:00:46.882854 4805 scope.go:117] "RemoveContainer" containerID="cde3ae7a99cdcd7be8f51705218c587bdaff2a8db7008fab456bcde3f496ca2a" Feb 26 18:00:46 crc kubenswrapper[4805]: I0226 18:00:46.941038 4805 scope.go:117] "RemoveContainer" containerID="fb246173955ffb56c9ea80ae6637ab230691d880eb3d111a5480e5535212888a" Feb 26 18:00:49 crc kubenswrapper[4805]: I0226 18:00:49.123453 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:49 crc kubenswrapper[4805]: I0226 18:00:49.123780 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:49 crc kubenswrapper[4805]: I0226 18:00:49.187564 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:49 crc kubenswrapper[4805]: I0226 18:00:49.895569 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:49 crc kubenswrapper[4805]: I0226 18:00:49.946827 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m2d5"] Feb 26 18:00:51 crc kubenswrapper[4805]: I0226 18:00:51.864435 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m2d5" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="registry-server" containerID="cri-o://d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1" gracePeriod=2 Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.491005 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.664615 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dncs\" (UniqueName: \"kubernetes.io/projected/1714aa07-983b-4521-814f-fb44b0ea75e5-kube-api-access-6dncs\") pod \"1714aa07-983b-4521-814f-fb44b0ea75e5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.664828 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-utilities\") pod \"1714aa07-983b-4521-814f-fb44b0ea75e5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.664891 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-catalog-content\") pod \"1714aa07-983b-4521-814f-fb44b0ea75e5\" (UID: \"1714aa07-983b-4521-814f-fb44b0ea75e5\") " Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.665630 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-utilities" (OuterVolumeSpecName: "utilities") pod "1714aa07-983b-4521-814f-fb44b0ea75e5" (UID: "1714aa07-983b-4521-814f-fb44b0ea75e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.681408 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1714aa07-983b-4521-814f-fb44b0ea75e5-kube-api-access-6dncs" (OuterVolumeSpecName: "kube-api-access-6dncs") pod "1714aa07-983b-4521-814f-fb44b0ea75e5" (UID: "1714aa07-983b-4521-814f-fb44b0ea75e5"). InnerVolumeSpecName "kube-api-access-6dncs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.686922 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1714aa07-983b-4521-814f-fb44b0ea75e5" (UID: "1714aa07-983b-4521-814f-fb44b0ea75e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.767342 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.767373 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dncs\" (UniqueName: \"kubernetes.io/projected/1714aa07-983b-4521-814f-fb44b0ea75e5-kube-api-access-6dncs\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.767385 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1714aa07-983b-4521-814f-fb44b0ea75e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.875707 4805 generic.go:334] "Generic (PLEG): container finished" podID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerID="d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1" exitCode=0 Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.875760 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerDied","Data":"d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1"} Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.875792 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7m2d5" event={"ID":"1714aa07-983b-4521-814f-fb44b0ea75e5","Type":"ContainerDied","Data":"064b48cdd2637579157a20bbaf7ac433f9481c42576d0e08a9faa266536646c1"} Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.875815 4805 scope.go:117] "RemoveContainer" containerID="d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.875886 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m2d5" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.914201 4805 scope.go:117] "RemoveContainer" containerID="a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.943247 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m2d5"] Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.972401 4805 scope.go:117] "RemoveContainer" containerID="62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce" Feb 26 18:00:52 crc kubenswrapper[4805]: I0226 18:00:52.997960 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m2d5"] Feb 26 18:00:53 crc kubenswrapper[4805]: I0226 18:00:53.023655 4805 scope.go:117] "RemoveContainer" containerID="d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1" Feb 26 18:00:53 crc kubenswrapper[4805]: E0226 18:00:53.024238 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1\": container with ID starting with d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1 not found: ID does not exist" containerID="d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1" Feb 26 18:00:53 crc kubenswrapper[4805]: I0226 18:00:53.024267 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1"} err="failed to get container status \"d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1\": rpc error: code = NotFound desc = could not find container \"d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1\": container with ID starting with d7ef18c19babaa19757272e7aa22efbe8f5bbfdbc87606a0ff46ab69ddade5c1 not found: ID does not exist" Feb 26 18:00:53 crc kubenswrapper[4805]: I0226 18:00:53.024288 4805 scope.go:117] "RemoveContainer" containerID="a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c" Feb 26 18:00:53 crc kubenswrapper[4805]: E0226 18:00:53.024737 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c\": container with ID starting with a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c not found: ID does not exist" containerID="a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c" Feb 26 18:00:53 crc kubenswrapper[4805]: I0226 18:00:53.024763 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c"} err="failed to get container status \"a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c\": rpc error: code = NotFound desc = could not find container \"a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c\": container with ID starting with a8de57cff5914aecc691cd0afab0b688a7e5059450119abd30f2657aa6b8d09c not found: ID does not exist" Feb 26 18:00:53 crc kubenswrapper[4805]: I0226 18:00:53.024779 4805 scope.go:117] "RemoveContainer" containerID="62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce" Feb 26 18:00:53 crc kubenswrapper[4805]: E0226 
18:00:53.024969 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce\": container with ID starting with 62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce not found: ID does not exist" containerID="62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce" Feb 26 18:00:53 crc kubenswrapper[4805]: I0226 18:00:53.024990 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce"} err="failed to get container status \"62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce\": rpc error: code = NotFound desc = could not find container \"62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce\": container with ID starting with 62aa3896f251605915605da140e8a1083948fd7bdc7836a9d4b720194bea98ce not found: ID does not exist" Feb 26 18:00:54 crc kubenswrapper[4805]: I0226 18:00:54.965551 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" path="/var/lib/kubelet/pods/1714aa07-983b-4521-814f-fb44b0ea75e5/volumes" Feb 26 18:00:57 crc kubenswrapper[4805]: I0226 18:00:57.953563 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 18:00:57 crc kubenswrapper[4805]: E0226 18:00:57.954522 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.156336 
4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29535481-jqwcv"] Feb 26 18:01:00 crc kubenswrapper[4805]: E0226 18:01:00.157144 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="extract-content" Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.157158 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="extract-content" Feb 26 18:01:00 crc kubenswrapper[4805]: E0226 18:01:00.157194 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="registry-server" Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.157200 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="registry-server" Feb 26 18:01:00 crc kubenswrapper[4805]: E0226 18:01:00.157213 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="extract-utilities" Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.157220 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="extract-utilities" Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.157430 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1714aa07-983b-4521-814f-fb44b0ea75e5" containerName="registry-server" Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.158271 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.173506 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535481-jqwcv"]
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.334219 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9p8\" (UniqueName: \"kubernetes.io/projected/b79390f8-65c0-4333-b6a1-a19baf15714c-kube-api-access-qg9p8\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.334358 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-config-data\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.334378 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-combined-ca-bundle\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.334451 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-fernet-keys\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.437174 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-config-data\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.437253 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-combined-ca-bundle\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.437402 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-fernet-keys\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.437511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9p8\" (UniqueName: \"kubernetes.io/projected/b79390f8-65c0-4333-b6a1-a19baf15714c-kube-api-access-qg9p8\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.442933 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-config-data\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.443943 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-fernet-keys\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.449808 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-combined-ca-bundle\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.456027 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9p8\" (UniqueName: \"kubernetes.io/projected/b79390f8-65c0-4333-b6a1-a19baf15714c-kube-api-access-qg9p8\") pod \"keystone-cron-29535481-jqwcv\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") " pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:00 crc kubenswrapper[4805]: I0226 18:01:00.492935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:01 crc kubenswrapper[4805]: I0226 18:01:01.018781 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29535481-jqwcv"]
Feb 26 18:01:01 crc kubenswrapper[4805]: I0226 18:01:01.982156 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535481-jqwcv" event={"ID":"b79390f8-65c0-4333-b6a1-a19baf15714c","Type":"ContainerStarted","Data":"e0a093f854b95e3c4e98f404a032f62f362de231c3540a557fddb19ac912c79a"}
Feb 26 18:01:01 crc kubenswrapper[4805]: I0226 18:01:01.982510 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535481-jqwcv" event={"ID":"b79390f8-65c0-4333-b6a1-a19baf15714c","Type":"ContainerStarted","Data":"68fb48c02ab45588e6d00c35546d3afa558c6f4bd3ebb3163e37d376f0bcd84d"}
Feb 26 18:01:02 crc kubenswrapper[4805]: I0226 18:01:02.009580 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29535481-jqwcv" podStartSLOduration=2.009557101 podStartE2EDuration="2.009557101s" podCreationTimestamp="2026-02-26 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 18:01:02.003289652 +0000 UTC m=+2776.565043991" watchObservedRunningTime="2026-02-26 18:01:02.009557101 +0000 UTC m=+2776.571311440"
Feb 26 18:01:05 crc kubenswrapper[4805]: I0226 18:01:05.014278 4805 generic.go:334] "Generic (PLEG): container finished" podID="b79390f8-65c0-4333-b6a1-a19baf15714c" containerID="e0a093f854b95e3c4e98f404a032f62f362de231c3540a557fddb19ac912c79a" exitCode=0
Feb 26 18:01:05 crc kubenswrapper[4805]: I0226 18:01:05.014383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535481-jqwcv" event={"ID":"b79390f8-65c0-4333-b6a1-a19baf15714c","Type":"ContainerDied","Data":"e0a093f854b95e3c4e98f404a032f62f362de231c3540a557fddb19ac912c79a"}
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.424917 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.594219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-combined-ca-bundle\") pod \"b79390f8-65c0-4333-b6a1-a19baf15714c\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") "
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.594402 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-config-data\") pod \"b79390f8-65c0-4333-b6a1-a19baf15714c\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") "
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.594469 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-fernet-keys\") pod \"b79390f8-65c0-4333-b6a1-a19baf15714c\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") "
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.594842 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg9p8\" (UniqueName: \"kubernetes.io/projected/b79390f8-65c0-4333-b6a1-a19baf15714c-kube-api-access-qg9p8\") pod \"b79390f8-65c0-4333-b6a1-a19baf15714c\" (UID: \"b79390f8-65c0-4333-b6a1-a19baf15714c\") "
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.601297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b79390f8-65c0-4333-b6a1-a19baf15714c" (UID: "b79390f8-65c0-4333-b6a1-a19baf15714c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.601575 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79390f8-65c0-4333-b6a1-a19baf15714c-kube-api-access-qg9p8" (OuterVolumeSpecName: "kube-api-access-qg9p8") pod "b79390f8-65c0-4333-b6a1-a19baf15714c" (UID: "b79390f8-65c0-4333-b6a1-a19baf15714c"). InnerVolumeSpecName "kube-api-access-qg9p8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.630976 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b79390f8-65c0-4333-b6a1-a19baf15714c" (UID: "b79390f8-65c0-4333-b6a1-a19baf15714c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.653324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-config-data" (OuterVolumeSpecName: "config-data") pod "b79390f8-65c0-4333-b6a1-a19baf15714c" (UID: "b79390f8-65c0-4333-b6a1-a19baf15714c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.697504 4805 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.697539 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg9p8\" (UniqueName: \"kubernetes.io/projected/b79390f8-65c0-4333-b6a1-a19baf15714c-kube-api-access-qg9p8\") on node \"crc\" DevicePath \"\""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.697552 4805 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 18:01:06 crc kubenswrapper[4805]: I0226 18:01:06.697565 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b79390f8-65c0-4333-b6a1-a19baf15714c-config-data\") on node \"crc\" DevicePath \"\""
Feb 26 18:01:07 crc kubenswrapper[4805]: I0226 18:01:07.035176 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29535481-jqwcv" event={"ID":"b79390f8-65c0-4333-b6a1-a19baf15714c","Type":"ContainerDied","Data":"68fb48c02ab45588e6d00c35546d3afa558c6f4bd3ebb3163e37d376f0bcd84d"}
Feb 26 18:01:07 crc kubenswrapper[4805]: I0226 18:01:07.035230 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68fb48c02ab45588e6d00c35546d3afa558c6f4bd3ebb3163e37d376f0bcd84d"
Feb 26 18:01:07 crc kubenswrapper[4805]: I0226 18:01:07.035345 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29535481-jqwcv"
Feb 26 18:01:12 crc kubenswrapper[4805]: I0226 18:01:12.953733 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172"
Feb 26 18:01:12 crc kubenswrapper[4805]: E0226 18:01:12.954497 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:01:26 crc kubenswrapper[4805]: I0226 18:01:26.964706 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172"
Feb 26 18:01:26 crc kubenswrapper[4805]: E0226 18:01:26.965805 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:01:38 crc kubenswrapper[4805]: I0226 18:01:38.954474 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172"
Feb 26 18:01:39 crc kubenswrapper[4805]: I0226 18:01:39.426550 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"15dbabcf6bac43c80f9b32d154755c0ee038612c5fab62982466ef8dd39a9291"}
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.146351 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535482-cbb7d"]
Feb 26 18:02:00 crc kubenswrapper[4805]: E0226 18:02:00.147522 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b79390f8-65c0-4333-b6a1-a19baf15714c" containerName="keystone-cron"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.147542 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b79390f8-65c0-4333-b6a1-a19baf15714c" containerName="keystone-cron"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.147808 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b79390f8-65c0-4333-b6a1-a19baf15714c" containerName="keystone-cron"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.148809 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.150820 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.151222 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.151916 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.167430 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535482-cbb7d"]
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.271997 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqx5d\" (UniqueName: \"kubernetes.io/projected/d82fd202-18a2-4b84-98d1-f87c0381566b-kube-api-access-gqx5d\") pod \"auto-csr-approver-29535482-cbb7d\" (UID: \"d82fd202-18a2-4b84-98d1-f87c0381566b\") " pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.375757 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqx5d\" (UniqueName: \"kubernetes.io/projected/d82fd202-18a2-4b84-98d1-f87c0381566b-kube-api-access-gqx5d\") pod \"auto-csr-approver-29535482-cbb7d\" (UID: \"d82fd202-18a2-4b84-98d1-f87c0381566b\") " pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.399864 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqx5d\" (UniqueName: \"kubernetes.io/projected/d82fd202-18a2-4b84-98d1-f87c0381566b-kube-api-access-gqx5d\") pod \"auto-csr-approver-29535482-cbb7d\" (UID: \"d82fd202-18a2-4b84-98d1-f87c0381566b\") " pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.473821 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:00 crc kubenswrapper[4805]: I0226 18:02:00.978729 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535482-cbb7d"]
Feb 26 18:02:01 crc kubenswrapper[4805]: I0226 18:02:01.646658 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535482-cbb7d" event={"ID":"d82fd202-18a2-4b84-98d1-f87c0381566b","Type":"ContainerStarted","Data":"24d4f5182e06484b970aef29f33f110be34096c0338f46a05fe78652c4f267b6"}
Feb 26 18:02:03 crc kubenswrapper[4805]: I0226 18:02:03.665079 4805 generic.go:334] "Generic (PLEG): container finished" podID="d82fd202-18a2-4b84-98d1-f87c0381566b" containerID="4e2c384e9166d76c8abb9429c338d70f14480544f1a467d21433bd0448edd194" exitCode=0
Feb 26 18:02:03 crc kubenswrapper[4805]: I0226 18:02:03.665170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535482-cbb7d" event={"ID":"d82fd202-18a2-4b84-98d1-f87c0381566b","Type":"ContainerDied","Data":"4e2c384e9166d76c8abb9429c338d70f14480544f1a467d21433bd0448edd194"}
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.104085 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.173535 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqx5d\" (UniqueName: \"kubernetes.io/projected/d82fd202-18a2-4b84-98d1-f87c0381566b-kube-api-access-gqx5d\") pod \"d82fd202-18a2-4b84-98d1-f87c0381566b\" (UID: \"d82fd202-18a2-4b84-98d1-f87c0381566b\") "
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.179410 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82fd202-18a2-4b84-98d1-f87c0381566b-kube-api-access-gqx5d" (OuterVolumeSpecName: "kube-api-access-gqx5d") pod "d82fd202-18a2-4b84-98d1-f87c0381566b" (UID: "d82fd202-18a2-4b84-98d1-f87c0381566b"). InnerVolumeSpecName "kube-api-access-gqx5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.277692 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqx5d\" (UniqueName: \"kubernetes.io/projected/d82fd202-18a2-4b84-98d1-f87c0381566b-kube-api-access-gqx5d\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.686978 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535482-cbb7d" event={"ID":"d82fd202-18a2-4b84-98d1-f87c0381566b","Type":"ContainerDied","Data":"24d4f5182e06484b970aef29f33f110be34096c0338f46a05fe78652c4f267b6"}
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.687362 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d4f5182e06484b970aef29f33f110be34096c0338f46a05fe78652c4f267b6"
Feb 26 18:02:05 crc kubenswrapper[4805]: I0226 18:02:05.687105 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535482-cbb7d"
Feb 26 18:02:06 crc kubenswrapper[4805]: I0226 18:02:06.182196 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535476-mnb4s"]
Feb 26 18:02:06 crc kubenswrapper[4805]: I0226 18:02:06.194436 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535476-mnb4s"]
Feb 26 18:02:06 crc kubenswrapper[4805]: I0226 18:02:06.965678 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="209d50d9-d2ec-4837-b292-a4b2a7f97825" path="/var/lib/kubelet/pods/209d50d9-d2ec-4837-b292-a4b2a7f97825/volumes"
Feb 26 18:02:45 crc kubenswrapper[4805]: I0226 18:02:45.083367 4805 generic.go:334] "Generic (PLEG): container finished" podID="b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" containerID="45b00c22861226a53dd35eaf2fe60f5b9c6fa50a2895ddebcc04ab3999e9ef8d" exitCode=0
Feb 26 18:02:45 crc kubenswrapper[4805]: I0226 18:02:45.083553 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" event={"ID":"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0","Type":"ContainerDied","Data":"45b00c22861226a53dd35eaf2fe60f5b9c6fa50a2895ddebcc04ab3999e9ef8d"}
Feb 26 18:02:46 crc kubenswrapper[4805]: I0226 18:02:46.876527 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74"
Feb 26 18:02:46 crc kubenswrapper[4805]: I0226 18:02:46.977419 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5w77\" (UniqueName: \"kubernetes.io/projected/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-kube-api-access-v5w77\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:46 crc kubenswrapper[4805]: I0226 18:02:46.977733 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-1\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:46 crc kubenswrapper[4805]: I0226 18:02:46.977832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ssh-key-openstack-edpm-ipam\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:46 crc kubenswrapper[4805]: I0226 18:02:46.994324 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-kube-api-access-v5w77" (OuterVolumeSpecName: "kube-api-access-v5w77") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "kube-api-access-v5w77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.012335 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.012445 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.080083 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-2\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.080147 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-0\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.080262 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-telemetry-combined-ca-bundle\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.080300 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-inventory\") pod \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\" (UID: \"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0\") "
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.081130 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.081153 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5w77\" (UniqueName: \"kubernetes.io/projected/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-kube-api-access-v5w77\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.081166 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.083332 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.104798 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.105460 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74" event={"ID":"b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0","Type":"ContainerDied","Data":"19dfeb024d4fc8a2e5e424bb406f56b8add563d7d7fe2f48f397d1562be7812c"}
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.105494 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19dfeb024d4fc8a2e5e424bb406f56b8add563d7d7fe2f48f397d1562be7812c"
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.105544 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xnt74"
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.112883 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-inventory" (OuterVolumeSpecName: "inventory") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.113730 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" (UID: "b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.182812 4805 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-inventory\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.182855 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.182871 4805 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.182884 4805 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 26 18:02:47 crc kubenswrapper[4805]: I0226 18:02:47.281066 4805 scope.go:117] "RemoveContainer" containerID="c9f0e9748e4b6f3bc8c07e2b8cdf2230275414981dd5a568c3717f672d33d5aa"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.208926 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qxrfc"]
Feb 26 18:03:03 crc kubenswrapper[4805]: E0226 18:03:03.209797 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.209811 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 26 18:03:03 crc kubenswrapper[4805]: E0226 18:03:03.209844 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82fd202-18a2-4b84-98d1-f87c0381566b" containerName="oc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.209849 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82fd202-18a2-4b84-98d1-f87c0381566b" containerName="oc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.210029 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.210056 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82fd202-18a2-4b84-98d1-f87c0381566b" containerName="oc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.231253 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.242232 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-catalog-content\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.242386 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-utilities\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.242492 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xljc\" (UniqueName: \"kubernetes.io/projected/b236e75d-415e-4101-9e99-956c10930bfd-kube-api-access-6xljc\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.271776 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxrfc"]
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.345961 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-catalog-content\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.346140 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-utilities\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.346229 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xljc\" (UniqueName: \"kubernetes.io/projected/b236e75d-415e-4101-9e99-956c10930bfd-kube-api-access-6xljc\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.346482 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-catalog-content\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.346788 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-utilities\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.380469 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xljc\" (UniqueName: \"kubernetes.io/projected/b236e75d-415e-4101-9e99-956c10930bfd-kube-api-access-6xljc\") pod \"redhat-operators-qxrfc\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:03 crc kubenswrapper[4805]: I0226 18:03:03.550423 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxrfc"
Feb 26 18:03:04 crc kubenswrapper[4805]: I0226 18:03:04.100652 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxrfc"]
Feb 26 18:03:04 crc kubenswrapper[4805]: I0226 18:03:04.453123 4805 generic.go:334] "Generic (PLEG): container finished" podID="b236e75d-415e-4101-9e99-956c10930bfd" containerID="a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0" exitCode=0
Feb 26 18:03:04 crc kubenswrapper[4805]: I0226 18:03:04.453172 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerDied","Data":"a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0"}
Feb 26 18:03:04 crc kubenswrapper[4805]: I0226 18:03:04.453204 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerStarted","Data":"fdda05f9c09c16d33ac0204aed052e7ae7c9a3ceb30df6896cb4f14b66ad9d80"}
Feb 26 18:03:05 crc kubenswrapper[4805]: I0226 18:03:05.462583 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerStarted","Data":"920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64"}
Feb 26 18:03:10 crc kubenswrapper[4805]: I0226 18:03:10.517900 4805 generic.go:334] "Generic (PLEG): container finished" podID="b236e75d-415e-4101-9e99-956c10930bfd" containerID="920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64" exitCode=0
Feb 26 18:03:10 crc kubenswrapper[4805]: I0226 18:03:10.518013 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerDied","Data":"920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64"}
Feb 26 18:03:11 crc kubenswrapper[4805]: I0226 18:03:11.533434 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerStarted","Data":"284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3"}
Feb 26 18:03:11 crc kubenswrapper[4805]: I0226 18:03:11.565302 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qxrfc" podStartSLOduration=2.095181998 podStartE2EDuration="8.565277843s" podCreationTimestamp="2026-02-26 18:03:03 +0000 UTC" firstStartedPulling="2026-02-26 18:03:04.454894532 +0000 UTC m=+2899.016648871" lastFinishedPulling="2026-02-26 18:03:10.924990357 +0000 UTC m=+2905.486744716" observedRunningTime="2026-02-26 18:03:11.553385701 +0000 UTC m=+2906.115140080" watchObservedRunningTime="2026-02-26 18:03:11.565277843 +0000 UTC m=+2906.127032182"
Feb 26 18:03:13 crc kubenswrapper[4805]: I0226 18:03:13.551122 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qxrfc" Feb 26 18:03:13 crc kubenswrapper[4805]: I0226 18:03:13.551401 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qxrfc" Feb 26 18:03:14 crc kubenswrapper[4805]: I0226 18:03:14.599617 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxrfc" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="registry-server" probeResult="failure" output=< Feb 26 18:03:14 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:03:14 crc kubenswrapper[4805]: > Feb 26 18:03:24 crc kubenswrapper[4805]: I0226 18:03:24.605315 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxrfc" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="registry-server" probeResult="failure" output=< Feb 26 18:03:24 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:03:24 crc kubenswrapper[4805]: > Feb 26 18:03:33 crc kubenswrapper[4805]: I0226 18:03:33.628959 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qxrfc" Feb 26 18:03:33 crc kubenswrapper[4805]: I0226 18:03:33.705075 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qxrfc" Feb 26 18:03:34 crc kubenswrapper[4805]: I0226 18:03:34.409432 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxrfc"] Feb 26 18:03:34 crc kubenswrapper[4805]: I0226 18:03:34.767998 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qxrfc" podUID="b236e75d-415e-4101-9e99-956c10930bfd" 
containerName="registry-server" containerID="cri-o://284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3" gracePeriod=2 Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.370615 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxrfc" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.513336 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-utilities\") pod \"b236e75d-415e-4101-9e99-956c10930bfd\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.513649 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-catalog-content\") pod \"b236e75d-415e-4101-9e99-956c10930bfd\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.513701 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xljc\" (UniqueName: \"kubernetes.io/projected/b236e75d-415e-4101-9e99-956c10930bfd-kube-api-access-6xljc\") pod \"b236e75d-415e-4101-9e99-956c10930bfd\" (UID: \"b236e75d-415e-4101-9e99-956c10930bfd\") " Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.515124 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-utilities" (OuterVolumeSpecName: "utilities") pod "b236e75d-415e-4101-9e99-956c10930bfd" (UID: "b236e75d-415e-4101-9e99-956c10930bfd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.521852 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b236e75d-415e-4101-9e99-956c10930bfd-kube-api-access-6xljc" (OuterVolumeSpecName: "kube-api-access-6xljc") pod "b236e75d-415e-4101-9e99-956c10930bfd" (UID: "b236e75d-415e-4101-9e99-956c10930bfd"). InnerVolumeSpecName "kube-api-access-6xljc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.616634 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.616671 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xljc\" (UniqueName: \"kubernetes.io/projected/b236e75d-415e-4101-9e99-956c10930bfd-kube-api-access-6xljc\") on node \"crc\" DevicePath \"\"" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.657152 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b236e75d-415e-4101-9e99-956c10930bfd" (UID: "b236e75d-415e-4101-9e99-956c10930bfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.718614 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b236e75d-415e-4101-9e99-956c10930bfd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.780306 4805 generic.go:334] "Generic (PLEG): container finished" podID="b236e75d-415e-4101-9e99-956c10930bfd" containerID="284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3" exitCode=0 Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.780354 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerDied","Data":"284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3"} Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.780387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxrfc" event={"ID":"b236e75d-415e-4101-9e99-956c10930bfd","Type":"ContainerDied","Data":"fdda05f9c09c16d33ac0204aed052e7ae7c9a3ceb30df6896cb4f14b66ad9d80"} Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.780404 4805 scope.go:117] "RemoveContainer" containerID="284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.780454 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qxrfc" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.803998 4805 scope.go:117] "RemoveContainer" containerID="920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.832716 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxrfc"] Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.849634 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qxrfc"] Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.858337 4805 scope.go:117] "RemoveContainer" containerID="a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.882139 4805 scope.go:117] "RemoveContainer" containerID="284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3" Feb 26 18:03:35 crc kubenswrapper[4805]: E0226 18:03:35.882803 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3\": container with ID starting with 284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3 not found: ID does not exist" containerID="284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.882939 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3"} err="failed to get container status \"284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3\": rpc error: code = NotFound desc = could not find container \"284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3\": container with ID starting with 284b83cc84f21f5fce5e55297d59a584d67d115c752292c503b45829619f97e3 not found: ID does 
not exist" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.882983 4805 scope.go:117] "RemoveContainer" containerID="920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64" Feb 26 18:03:35 crc kubenswrapper[4805]: E0226 18:03:35.883478 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64\": container with ID starting with 920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64 not found: ID does not exist" containerID="920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.883515 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64"} err="failed to get container status \"920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64\": rpc error: code = NotFound desc = could not find container \"920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64\": container with ID starting with 920c15d1d66158fbbbede0b2906f36fb5937c387393de4cb318465680dcf7a64 not found: ID does not exist" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.883539 4805 scope.go:117] "RemoveContainer" containerID="a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0" Feb 26 18:03:35 crc kubenswrapper[4805]: E0226 18:03:35.883831 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0\": container with ID starting with a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0 not found: ID does not exist" containerID="a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0" Feb 26 18:03:35 crc kubenswrapper[4805]: I0226 18:03:35.883871 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0"} err="failed to get container status \"a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0\": rpc error: code = NotFound desc = could not find container \"a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0\": container with ID starting with a5f38b8d967c1741eb3db6fe103e76534f657bef16d8be638d62bbb72a1f3ef0 not found: ID does not exist" Feb 26 18:03:36 crc kubenswrapper[4805]: I0226 18:03:36.967325 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b236e75d-415e-4101-9e99-956c10930bfd" path="/var/lib/kubelet/pods/b236e75d-415e-4101-9e99-956c10930bfd/volumes" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.626358 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpgmt"] Feb 26 18:03:38 crc kubenswrapper[4805]: E0226 18:03:38.627174 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="extract-utilities" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.627193 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="extract-utilities" Feb 26 18:03:38 crc kubenswrapper[4805]: E0226 18:03:38.627223 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="extract-content" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.627229 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="extract-content" Feb 26 18:03:38 crc kubenswrapper[4805]: E0226 18:03:38.627261 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="registry-server" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.627268 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="registry-server" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.627487 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b236e75d-415e-4101-9e99-956c10930bfd" containerName="registry-server" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.629368 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.636684 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpgmt"] Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.799200 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-catalog-content\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.799247 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgwd\" (UniqueName: \"kubernetes.io/projected/76da996e-6832-4174-ba93-dfbcb0eea305-kube-api-access-vlgwd\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.799617 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-utilities\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 
18:03:38.901975 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-catalog-content\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.902068 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlgwd\" (UniqueName: \"kubernetes.io/projected/76da996e-6832-4174-ba93-dfbcb0eea305-kube-api-access-vlgwd\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.902226 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-utilities\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.902564 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-catalog-content\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.902742 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-utilities\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.929225 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlgwd\" (UniqueName: \"kubernetes.io/projected/76da996e-6832-4174-ba93-dfbcb0eea305-kube-api-access-vlgwd\") pod \"certified-operators-mpgmt\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:38 crc kubenswrapper[4805]: I0226 18:03:38.950581 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:39 crc kubenswrapper[4805]: I0226 18:03:39.526692 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpgmt"] Feb 26 18:03:39 crc kubenswrapper[4805]: W0226 18:03:39.537654 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76da996e_6832_4174_ba93_dfbcb0eea305.slice/crio-b3b011b94422019f4ec847b158cdf57ecc678c4dbe0c252ff8bac0014db97857 WatchSource:0}: Error finding container b3b011b94422019f4ec847b158cdf57ecc678c4dbe0c252ff8bac0014db97857: Status 404 returned error can't find the container with id b3b011b94422019f4ec847b158cdf57ecc678c4dbe0c252ff8bac0014db97857 Feb 26 18:03:39 crc kubenswrapper[4805]: I0226 18:03:39.835319 4805 generic.go:334] "Generic (PLEG): container finished" podID="76da996e-6832-4174-ba93-dfbcb0eea305" containerID="4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac" exitCode=0 Feb 26 18:03:39 crc kubenswrapper[4805]: I0226 18:03:39.835358 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpgmt" event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerDied","Data":"4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac"} Feb 26 18:03:39 crc kubenswrapper[4805]: I0226 18:03:39.835383 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpgmt" 
event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerStarted","Data":"b3b011b94422019f4ec847b158cdf57ecc678c4dbe0c252ff8bac0014db97857"} Feb 26 18:03:40 crc kubenswrapper[4805]: I0226 18:03:40.851565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpgmt" event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerStarted","Data":"c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4"} Feb 26 18:03:42 crc kubenswrapper[4805]: I0226 18:03:42.878367 4805 generic.go:334] "Generic (PLEG): container finished" podID="76da996e-6832-4174-ba93-dfbcb0eea305" containerID="c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4" exitCode=0 Feb 26 18:03:42 crc kubenswrapper[4805]: I0226 18:03:42.878454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpgmt" event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerDied","Data":"c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4"} Feb 26 18:03:43 crc kubenswrapper[4805]: I0226 18:03:43.893688 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpgmt" event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerStarted","Data":"bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976"} Feb 26 18:03:43 crc kubenswrapper[4805]: I0226 18:03:43.930896 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpgmt" podStartSLOduration=2.492370647 podStartE2EDuration="5.930876884s" podCreationTimestamp="2026-02-26 18:03:38 +0000 UTC" firstStartedPulling="2026-02-26 18:03:39.837694974 +0000 UTC m=+2934.399449313" lastFinishedPulling="2026-02-26 18:03:43.276201211 +0000 UTC m=+2937.837955550" observedRunningTime="2026-02-26 18:03:43.913237907 +0000 UTC m=+2938.474992276" watchObservedRunningTime="2026-02-26 18:03:43.930876884 +0000 UTC 
m=+2938.492631233" Feb 26 18:03:48 crc kubenswrapper[4805]: I0226 18:03:48.951259 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:48 crc kubenswrapper[4805]: I0226 18:03:48.952795 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:49 crc kubenswrapper[4805]: I0226 18:03:49.038290 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:50 crc kubenswrapper[4805]: I0226 18:03:50.069418 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:50 crc kubenswrapper[4805]: I0226 18:03:50.138565 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpgmt"] Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.032055 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mpgmt" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="registry-server" containerID="cri-o://bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976" gracePeriod=2 Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.581082 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.714484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-catalog-content\") pod \"76da996e-6832-4174-ba93-dfbcb0eea305\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.714645 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-utilities\") pod \"76da996e-6832-4174-ba93-dfbcb0eea305\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.714846 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlgwd\" (UniqueName: \"kubernetes.io/projected/76da996e-6832-4174-ba93-dfbcb0eea305-kube-api-access-vlgwd\") pod \"76da996e-6832-4174-ba93-dfbcb0eea305\" (UID: \"76da996e-6832-4174-ba93-dfbcb0eea305\") " Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.716811 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-utilities" (OuterVolumeSpecName: "utilities") pod "76da996e-6832-4174-ba93-dfbcb0eea305" (UID: "76da996e-6832-4174-ba93-dfbcb0eea305"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.721315 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76da996e-6832-4174-ba93-dfbcb0eea305-kube-api-access-vlgwd" (OuterVolumeSpecName: "kube-api-access-vlgwd") pod "76da996e-6832-4174-ba93-dfbcb0eea305" (UID: "76da996e-6832-4174-ba93-dfbcb0eea305"). InnerVolumeSpecName "kube-api-access-vlgwd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.771166 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76da996e-6832-4174-ba93-dfbcb0eea305" (UID: "76da996e-6832-4174-ba93-dfbcb0eea305"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.816875 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlgwd\" (UniqueName: \"kubernetes.io/projected/76da996e-6832-4174-ba93-dfbcb0eea305-kube-api-access-vlgwd\") on node \"crc\" DevicePath \"\"" Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.816904 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:03:52 crc kubenswrapper[4805]: I0226 18:03:52.816919 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76da996e-6832-4174-ba93-dfbcb0eea305-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.043798 4805 generic.go:334] "Generic (PLEG): container finished" podID="76da996e-6832-4174-ba93-dfbcb0eea305" containerID="bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976" exitCode=0 Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.043834 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpgmt" event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerDied","Data":"bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976"} Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.043860 4805 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mpgmt" event={"ID":"76da996e-6832-4174-ba93-dfbcb0eea305","Type":"ContainerDied","Data":"b3b011b94422019f4ec847b158cdf57ecc678c4dbe0c252ff8bac0014db97857"} Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.043881 4805 scope.go:117] "RemoveContainer" containerID="bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.045551 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpgmt" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.074669 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpgmt"] Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.076241 4805 scope.go:117] "RemoveContainer" containerID="c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.110866 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mpgmt"] Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.113890 4805 scope.go:117] "RemoveContainer" containerID="4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.185118 4805 scope.go:117] "RemoveContainer" containerID="bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976" Feb 26 18:03:53 crc kubenswrapper[4805]: E0226 18:03:53.185785 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976\": container with ID starting with bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976 not found: ID does not exist" containerID="bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 
18:03:53.185838 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976"} err="failed to get container status \"bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976\": rpc error: code = NotFound desc = could not find container \"bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976\": container with ID starting with bb00c73c8b1ff11c0a551718f14f451f5b9618780449c3ffa899c31d9fa9e976 not found: ID does not exist" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.185865 4805 scope.go:117] "RemoveContainer" containerID="c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4" Feb 26 18:03:53 crc kubenswrapper[4805]: E0226 18:03:53.186303 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4\": container with ID starting with c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4 not found: ID does not exist" containerID="c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.186345 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4"} err="failed to get container status \"c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4\": rpc error: code = NotFound desc = could not find container \"c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4\": container with ID starting with c597eaa560416cf3f50e6bc92c24787613eadff13927908e43ac41d6a6fb0ab4 not found: ID does not exist" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.186371 4805 scope.go:117] "RemoveContainer" containerID="4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac" Feb 26 18:03:53 crc 
kubenswrapper[4805]: E0226 18:03:53.186938 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac\": container with ID starting with 4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac not found: ID does not exist" containerID="4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac" Feb 26 18:03:53 crc kubenswrapper[4805]: I0226 18:03:53.187140 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac"} err="failed to get container status \"4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac\": rpc error: code = NotFound desc = could not find container \"4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac\": container with ID starting with 4069ab81c566ed0a86ec23a902f2963789117a1b1f47dc34eccd757bac92dcac not found: ID does not exist" Feb 26 18:03:54 crc kubenswrapper[4805]: I0226 18:03:54.968549 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" path="/var/lib/kubelet/pods/76da996e-6832-4174-ba93-dfbcb0eea305/volumes" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.165967 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535484-lqj4h"] Feb 26 18:04:00 crc kubenswrapper[4805]: E0226 18:04:00.166949 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="extract-content" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.166966 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="extract-content" Feb 26 18:04:00 crc kubenswrapper[4805]: E0226 18:04:00.166983 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="extract-utilities" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.166990 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="extract-utilities" Feb 26 18:04:00 crc kubenswrapper[4805]: E0226 18:04:00.167049 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="registry-server" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.167057 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="registry-server" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.167256 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="76da996e-6832-4174-ba93-dfbcb0eea305" containerName="registry-server" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.168095 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.173218 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.173437 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.174230 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.188426 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535484-lqj4h"] Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.288763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5cs4\" (UniqueName: 
\"kubernetes.io/projected/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd-kube-api-access-v5cs4\") pod \"auto-csr-approver-29535484-lqj4h\" (UID: \"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd\") " pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.390517 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5cs4\" (UniqueName: \"kubernetes.io/projected/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd-kube-api-access-v5cs4\") pod \"auto-csr-approver-29535484-lqj4h\" (UID: \"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd\") " pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.412416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5cs4\" (UniqueName: \"kubernetes.io/projected/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd-kube-api-access-v5cs4\") pod \"auto-csr-approver-29535484-lqj4h\" (UID: \"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd\") " pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.500398 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:00 crc kubenswrapper[4805]: I0226 18:04:00.965758 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535484-lqj4h"] Feb 26 18:04:01 crc kubenswrapper[4805]: I0226 18:04:01.145400 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" event={"ID":"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd","Type":"ContainerStarted","Data":"5d1b9decd78ce329585022b4d6248ee5cacc54cdf7395ee269f36cbdbd239389"} Feb 26 18:04:02 crc kubenswrapper[4805]: I0226 18:04:02.977821 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:04:02 crc kubenswrapper[4805]: I0226 18:04:02.978379 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:04:03 crc kubenswrapper[4805]: I0226 18:04:03.167042 4805 generic.go:334] "Generic (PLEG): container finished" podID="54d572b5-47a3-4ca9-a5dc-d1cd09d005fd" containerID="1f1d52e8831a1bf5a4a374a0da7cb5b63e0fe23fdfdfa1662d4660509163ea5b" exitCode=0 Feb 26 18:04:03 crc kubenswrapper[4805]: I0226 18:04:03.167099 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" event={"ID":"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd","Type":"ContainerDied","Data":"1f1d52e8831a1bf5a4a374a0da7cb5b63e0fe23fdfdfa1662d4660509163ea5b"} Feb 26 18:04:04 crc kubenswrapper[4805]: I0226 18:04:04.663152 4805 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:04 crc kubenswrapper[4805]: I0226 18:04:04.797240 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5cs4\" (UniqueName: \"kubernetes.io/projected/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd-kube-api-access-v5cs4\") pod \"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd\" (UID: \"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd\") " Feb 26 18:04:04 crc kubenswrapper[4805]: I0226 18:04:04.803458 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd-kube-api-access-v5cs4" (OuterVolumeSpecName: "kube-api-access-v5cs4") pod "54d572b5-47a3-4ca9-a5dc-d1cd09d005fd" (UID: "54d572b5-47a3-4ca9-a5dc-d1cd09d005fd"). InnerVolumeSpecName "kube-api-access-v5cs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:04:04 crc kubenswrapper[4805]: I0226 18:04:04.900425 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5cs4\" (UniqueName: \"kubernetes.io/projected/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd-kube-api-access-v5cs4\") on node \"crc\" DevicePath \"\"" Feb 26 18:04:05 crc kubenswrapper[4805]: I0226 18:04:05.191272 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" event={"ID":"54d572b5-47a3-4ca9-a5dc-d1cd09d005fd","Type":"ContainerDied","Data":"5d1b9decd78ce329585022b4d6248ee5cacc54cdf7395ee269f36cbdbd239389"} Feb 26 18:04:05 crc kubenswrapper[4805]: I0226 18:04:05.191316 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d1b9decd78ce329585022b4d6248ee5cacc54cdf7395ee269f36cbdbd239389" Feb 26 18:04:05 crc kubenswrapper[4805]: I0226 18:04:05.191353 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535484-lqj4h" Feb 26 18:04:05 crc kubenswrapper[4805]: I0226 18:04:05.756639 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535478-j6qkw"] Feb 26 18:04:05 crc kubenswrapper[4805]: I0226 18:04:05.764819 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535478-j6qkw"] Feb 26 18:04:06 crc kubenswrapper[4805]: I0226 18:04:06.970963 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da01a5e5-b523-47bc-a444-dbd5cb857330" path="/var/lib/kubelet/pods/da01a5e5-b523-47bc-a444-dbd5cb857330/volumes" Feb 26 18:04:32 crc kubenswrapper[4805]: I0226 18:04:32.977953 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:04:32 crc kubenswrapper[4805]: I0226 18:04:32.978783 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:04:47 crc kubenswrapper[4805]: I0226 18:04:47.387085 4805 scope.go:117] "RemoveContainer" containerID="088f703254843fc8336d7e6aa2addc1d3ee78ed8239ca3a1aebe16f9b30b3441" Feb 26 18:05:02 crc kubenswrapper[4805]: I0226 18:05:02.977915 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:05:02 crc kubenswrapper[4805]: 
I0226 18:05:02.978684 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:05:02 crc kubenswrapper[4805]: I0226 18:05:02.978767 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 18:05:02 crc kubenswrapper[4805]: I0226 18:05:02.979950 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15dbabcf6bac43c80f9b32d154755c0ee038612c5fab62982466ef8dd39a9291"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 18:05:02 crc kubenswrapper[4805]: I0226 18:05:02.980088 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://15dbabcf6bac43c80f9b32d154755c0ee038612c5fab62982466ef8dd39a9291" gracePeriod=600 Feb 26 18:05:03 crc kubenswrapper[4805]: I0226 18:05:03.844377 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="15dbabcf6bac43c80f9b32d154755c0ee038612c5fab62982466ef8dd39a9291" exitCode=0 Feb 26 18:05:03 crc kubenswrapper[4805]: I0226 18:05:03.844446 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"15dbabcf6bac43c80f9b32d154755c0ee038612c5fab62982466ef8dd39a9291"} Feb 26 18:05:03 crc 
kubenswrapper[4805]: I0226 18:05:03.845043 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"} Feb 26 18:05:03 crc kubenswrapper[4805]: I0226 18:05:03.845076 4805 scope.go:117] "RemoveContainer" containerID="1491bbd2becc7a38ab4213e51d123b16b67ad1e023fc2d8be2998a56175f8172" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.181471 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535486-76rct"] Feb 26 18:06:00 crc kubenswrapper[4805]: E0226 18:06:00.182622 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54d572b5-47a3-4ca9-a5dc-d1cd09d005fd" containerName="oc" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.182637 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="54d572b5-47a3-4ca9-a5dc-d1cd09d005fd" containerName="oc" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.182917 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="54d572b5-47a3-4ca9-a5dc-d1cd09d005fd" containerName="oc" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.184071 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535486-76rct"] Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.184158 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.193707 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.193953 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.194146 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.254335 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8l8s\" (UniqueName: \"kubernetes.io/projected/acbee537-f41b-4360-b00f-a303e9ac4465-kube-api-access-f8l8s\") pod \"auto-csr-approver-29535486-76rct\" (UID: \"acbee537-f41b-4360-b00f-a303e9ac4465\") " pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.356720 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8l8s\" (UniqueName: \"kubernetes.io/projected/acbee537-f41b-4360-b00f-a303e9ac4465-kube-api-access-f8l8s\") pod \"auto-csr-approver-29535486-76rct\" (UID: \"acbee537-f41b-4360-b00f-a303e9ac4465\") " pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.375592 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8l8s\" (UniqueName: \"kubernetes.io/projected/acbee537-f41b-4360-b00f-a303e9ac4465-kube-api-access-f8l8s\") pod \"auto-csr-approver-29535486-76rct\" (UID: \"acbee537-f41b-4360-b00f-a303e9ac4465\") " pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.521442 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.970657 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535486-76rct"] Feb 26 18:06:00 crc kubenswrapper[4805]: I0226 18:06:00.977940 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 18:06:01 crc kubenswrapper[4805]: I0226 18:06:01.515073 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535486-76rct" event={"ID":"acbee537-f41b-4360-b00f-a303e9ac4465","Type":"ContainerStarted","Data":"e9b647f2374d0bd852bce16479ee0f28457630f337f5f98d1f18d2e547c60340"} Feb 26 18:06:02 crc kubenswrapper[4805]: I0226 18:06:02.524480 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535486-76rct" event={"ID":"acbee537-f41b-4360-b00f-a303e9ac4465","Type":"ContainerStarted","Data":"9a9353c37fe195c1555391589b64faad7b92d590b36b5dc5f3ae5e8b81feacd3"} Feb 26 18:06:02 crc kubenswrapper[4805]: I0226 18:06:02.539217 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535486-76rct" podStartSLOduration=1.4410819049999999 podStartE2EDuration="2.539197038s" podCreationTimestamp="2026-02-26 18:06:00 +0000 UTC" firstStartedPulling="2026-02-26 18:06:00.977609868 +0000 UTC m=+3075.539364207" lastFinishedPulling="2026-02-26 18:06:02.075725001 +0000 UTC m=+3076.637479340" observedRunningTime="2026-02-26 18:06:02.536836878 +0000 UTC m=+3077.098591207" watchObservedRunningTime="2026-02-26 18:06:02.539197038 +0000 UTC m=+3077.100951397" Feb 26 18:06:03 crc kubenswrapper[4805]: I0226 18:06:03.545577 4805 generic.go:334] "Generic (PLEG): container finished" podID="acbee537-f41b-4360-b00f-a303e9ac4465" containerID="9a9353c37fe195c1555391589b64faad7b92d590b36b5dc5f3ae5e8b81feacd3" exitCode=0 Feb 26 18:06:03 crc 
kubenswrapper[4805]: I0226 18:06:03.545708 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535486-76rct" event={"ID":"acbee537-f41b-4360-b00f-a303e9ac4465","Type":"ContainerDied","Data":"9a9353c37fe195c1555391589b64faad7b92d590b36b5dc5f3ae5e8b81feacd3"} Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.046423 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.166249 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8l8s\" (UniqueName: \"kubernetes.io/projected/acbee537-f41b-4360-b00f-a303e9ac4465-kube-api-access-f8l8s\") pod \"acbee537-f41b-4360-b00f-a303e9ac4465\" (UID: \"acbee537-f41b-4360-b00f-a303e9ac4465\") " Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.174453 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acbee537-f41b-4360-b00f-a303e9ac4465-kube-api-access-f8l8s" (OuterVolumeSpecName: "kube-api-access-f8l8s") pod "acbee537-f41b-4360-b00f-a303e9ac4465" (UID: "acbee537-f41b-4360-b00f-a303e9ac4465"). InnerVolumeSpecName "kube-api-access-f8l8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.270274 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8l8s\" (UniqueName: \"kubernetes.io/projected/acbee537-f41b-4360-b00f-a303e9ac4465-kube-api-access-f8l8s\") on node \"crc\" DevicePath \"\"" Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.571493 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535486-76rct" event={"ID":"acbee537-f41b-4360-b00f-a303e9ac4465","Type":"ContainerDied","Data":"e9b647f2374d0bd852bce16479ee0f28457630f337f5f98d1f18d2e547c60340"} Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.571570 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b647f2374d0bd852bce16479ee0f28457630f337f5f98d1f18d2e547c60340" Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.571639 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535486-76rct" Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.630220 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535480-4fqk4"] Feb 26 18:06:05 crc kubenswrapper[4805]: I0226 18:06:05.639754 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535480-4fqk4"] Feb 26 18:06:06 crc kubenswrapper[4805]: I0226 18:06:06.967661 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c164d473-a7f5-4a34-bd10-83e762547f46" path="/var/lib/kubelet/pods/c164d473-a7f5-4a34-bd10-83e762547f46/volumes" Feb 26 18:06:47 crc kubenswrapper[4805]: I0226 18:06:47.506187 4805 scope.go:117] "RemoveContainer" containerID="09b0abbfc876806cdc905c0e45224fffe27d589c2db03fffce73c5b0507f7209" Feb 26 18:07:32 crc kubenswrapper[4805]: I0226 18:07:32.978285 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:07:32 crc kubenswrapper[4805]: I0226 18:07:32.978891 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.143666 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535488-rnhtp"] Feb 26 18:08:00 crc kubenswrapper[4805]: E0226 18:08:00.144624 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbee537-f41b-4360-b00f-a303e9ac4465" containerName="oc" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.144637 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbee537-f41b-4360-b00f-a303e9ac4465" containerName="oc" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.144842 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="acbee537-f41b-4360-b00f-a303e9ac4465" containerName="oc" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.145698 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.149810 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.150524 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.154753 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.164105 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535488-rnhtp"] Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.173886 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4js6f\" (UniqueName: \"kubernetes.io/projected/645f1d1b-d370-4524-bdde-068cddfcf41a-kube-api-access-4js6f\") pod \"auto-csr-approver-29535488-rnhtp\" (UID: \"645f1d1b-d370-4524-bdde-068cddfcf41a\") " pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.276390 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4js6f\" (UniqueName: \"kubernetes.io/projected/645f1d1b-d370-4524-bdde-068cddfcf41a-kube-api-access-4js6f\") pod \"auto-csr-approver-29535488-rnhtp\" (UID: \"645f1d1b-d370-4524-bdde-068cddfcf41a\") " pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.304285 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4js6f\" (UniqueName: \"kubernetes.io/projected/645f1d1b-d370-4524-bdde-068cddfcf41a-kube-api-access-4js6f\") pod \"auto-csr-approver-29535488-rnhtp\" (UID: \"645f1d1b-d370-4524-bdde-068cddfcf41a\") " 
pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:00 crc kubenswrapper[4805]: I0226 18:08:00.512594 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:01 crc kubenswrapper[4805]: I0226 18:08:01.003850 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535488-rnhtp"] Feb 26 18:08:01 crc kubenswrapper[4805]: W0226 18:08:01.009248 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645f1d1b_d370_4524_bdde_068cddfcf41a.slice/crio-eb420de0653c374cad3ce39b8f1c5411c8b14c50cfcca107d3b74711d69ff864 WatchSource:0}: Error finding container eb420de0653c374cad3ce39b8f1c5411c8b14c50cfcca107d3b74711d69ff864: Status 404 returned error can't find the container with id eb420de0653c374cad3ce39b8f1c5411c8b14c50cfcca107d3b74711d69ff864 Feb 26 18:08:01 crc kubenswrapper[4805]: I0226 18:08:01.857007 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" event={"ID":"645f1d1b-d370-4524-bdde-068cddfcf41a","Type":"ContainerStarted","Data":"eb420de0653c374cad3ce39b8f1c5411c8b14c50cfcca107d3b74711d69ff864"} Feb 26 18:08:02 crc kubenswrapper[4805]: I0226 18:08:02.866324 4805 generic.go:334] "Generic (PLEG): container finished" podID="645f1d1b-d370-4524-bdde-068cddfcf41a" containerID="e6b4b7a0124934599dd08cd60161f5fa01f7cca67d6b665b528ba7db9a62e58c" exitCode=0 Feb 26 18:08:02 crc kubenswrapper[4805]: I0226 18:08:02.867197 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" event={"ID":"645f1d1b-d370-4524-bdde-068cddfcf41a","Type":"ContainerDied","Data":"e6b4b7a0124934599dd08cd60161f5fa01f7cca67d6b665b528ba7db9a62e58c"} Feb 26 18:08:02 crc kubenswrapper[4805]: I0226 18:08:02.977425 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:08:02 crc kubenswrapper[4805]: I0226 18:08:02.977479 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.264122 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.376461 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4js6f\" (UniqueName: \"kubernetes.io/projected/645f1d1b-d370-4524-bdde-068cddfcf41a-kube-api-access-4js6f\") pod \"645f1d1b-d370-4524-bdde-068cddfcf41a\" (UID: \"645f1d1b-d370-4524-bdde-068cddfcf41a\") " Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.385274 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645f1d1b-d370-4524-bdde-068cddfcf41a-kube-api-access-4js6f" (OuterVolumeSpecName: "kube-api-access-4js6f") pod "645f1d1b-d370-4524-bdde-068cddfcf41a" (UID: "645f1d1b-d370-4524-bdde-068cddfcf41a"). InnerVolumeSpecName "kube-api-access-4js6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.479235 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4js6f\" (UniqueName: \"kubernetes.io/projected/645f1d1b-d370-4524-bdde-068cddfcf41a-kube-api-access-4js6f\") on node \"crc\" DevicePath \"\"" Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.887744 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" event={"ID":"645f1d1b-d370-4524-bdde-068cddfcf41a","Type":"ContainerDied","Data":"eb420de0653c374cad3ce39b8f1c5411c8b14c50cfcca107d3b74711d69ff864"} Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.887790 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb420de0653c374cad3ce39b8f1c5411c8b14c50cfcca107d3b74711d69ff864" Feb 26 18:08:04 crc kubenswrapper[4805]: I0226 18:08:04.887803 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535488-rnhtp" Feb 26 18:08:05 crc kubenswrapper[4805]: I0226 18:08:05.369933 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535482-cbb7d"] Feb 26 18:08:05 crc kubenswrapper[4805]: I0226 18:08:05.382246 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535482-cbb7d"] Feb 26 18:08:06 crc kubenswrapper[4805]: I0226 18:08:06.968942 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82fd202-18a2-4b84-98d1-f87c0381566b" path="/var/lib/kubelet/pods/d82fd202-18a2-4b84-98d1-f87c0381566b/volumes" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.737473 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmmpt"] Feb 26 18:08:19 crc kubenswrapper[4805]: E0226 18:08:19.738604 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="645f1d1b-d370-4524-bdde-068cddfcf41a" containerName="oc" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.738624 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="645f1d1b-d370-4524-bdde-068cddfcf41a" containerName="oc" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.738885 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="645f1d1b-d370-4524-bdde-068cddfcf41a" containerName="oc" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.741835 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.750655 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmmpt"] Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.857763 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4grb\" (UniqueName: \"kubernetes.io/projected/d0e42686-60cb-41d8-af2f-fd5832383c6a-kube-api-access-d4grb\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.857834 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-utilities\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.857909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-catalog-content\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " 
pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.959230 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4grb\" (UniqueName: \"kubernetes.io/projected/d0e42686-60cb-41d8-af2f-fd5832383c6a-kube-api-access-d4grb\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.959307 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-utilities\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.959376 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-catalog-content\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.959825 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-utilities\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:19 crc kubenswrapper[4805]: I0226 18:08:19.959963 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-catalog-content\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " 
pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:20 crc kubenswrapper[4805]: I0226 18:08:20.007103 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4grb\" (UniqueName: \"kubernetes.io/projected/d0e42686-60cb-41d8-af2f-fd5832383c6a-kube-api-access-d4grb\") pod \"community-operators-gmmpt\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:20 crc kubenswrapper[4805]: I0226 18:08:20.077701 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:20 crc kubenswrapper[4805]: I0226 18:08:20.600895 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmmpt"] Feb 26 18:08:21 crc kubenswrapper[4805]: I0226 18:08:21.118199 4805 generic.go:334] "Generic (PLEG): container finished" podID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerID="8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18" exitCode=0 Feb 26 18:08:21 crc kubenswrapper[4805]: I0226 18:08:21.118309 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmmpt" event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerDied","Data":"8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18"} Feb 26 18:08:21 crc kubenswrapper[4805]: I0226 18:08:21.118482 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmmpt" event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerStarted","Data":"ca8cb279c56f9edac8ab49dd665af9a82af9e2603984b72627a20fbe09397ed6"} Feb 26 18:08:22 crc kubenswrapper[4805]: I0226 18:08:22.131253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmmpt" 
event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerStarted","Data":"08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912"} Feb 26 18:08:24 crc kubenswrapper[4805]: I0226 18:08:24.152792 4805 generic.go:334] "Generic (PLEG): container finished" podID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerID="08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912" exitCode=0 Feb 26 18:08:24 crc kubenswrapper[4805]: I0226 18:08:24.152840 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmmpt" event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerDied","Data":"08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912"} Feb 26 18:08:25 crc kubenswrapper[4805]: I0226 18:08:25.167569 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmmpt" event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerStarted","Data":"2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3"} Feb 26 18:08:25 crc kubenswrapper[4805]: I0226 18:08:25.190608 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmmpt" podStartSLOduration=2.7552874750000003 podStartE2EDuration="6.190581826s" podCreationTimestamp="2026-02-26 18:08:19 +0000 UTC" firstStartedPulling="2026-02-26 18:08:21.120626058 +0000 UTC m=+3215.682380397" lastFinishedPulling="2026-02-26 18:08:24.555920409 +0000 UTC m=+3219.117674748" observedRunningTime="2026-02-26 18:08:25.189255893 +0000 UTC m=+3219.751010252" watchObservedRunningTime="2026-02-26 18:08:25.190581826 +0000 UTC m=+3219.752336185" Feb 26 18:08:30 crc kubenswrapper[4805]: I0226 18:08:30.078764 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:30 crc kubenswrapper[4805]: I0226 18:08:30.079456 4805 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:30 crc kubenswrapper[4805]: I0226 18:08:30.156849 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:30 crc kubenswrapper[4805]: I0226 18:08:30.318426 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:30 crc kubenswrapper[4805]: I0226 18:08:30.411815 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmmpt"] Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.265702 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmmpt" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="registry-server" containerID="cri-o://2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3" gracePeriod=2 Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.848968 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.950047 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-utilities\") pod \"d0e42686-60cb-41d8-af2f-fd5832383c6a\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.950268 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-catalog-content\") pod \"d0e42686-60cb-41d8-af2f-fd5832383c6a\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.950390 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4grb\" (UniqueName: \"kubernetes.io/projected/d0e42686-60cb-41d8-af2f-fd5832383c6a-kube-api-access-d4grb\") pod \"d0e42686-60cb-41d8-af2f-fd5832383c6a\" (UID: \"d0e42686-60cb-41d8-af2f-fd5832383c6a\") " Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.950954 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-utilities" (OuterVolumeSpecName: "utilities") pod "d0e42686-60cb-41d8-af2f-fd5832383c6a" (UID: "d0e42686-60cb-41d8-af2f-fd5832383c6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.968956 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e42686-60cb-41d8-af2f-fd5832383c6a-kube-api-access-d4grb" (OuterVolumeSpecName: "kube-api-access-d4grb") pod "d0e42686-60cb-41d8-af2f-fd5832383c6a" (UID: "d0e42686-60cb-41d8-af2f-fd5832383c6a"). InnerVolumeSpecName "kube-api-access-d4grb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.978754 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:08:32 crc kubenswrapper[4805]: I0226 18:08:32.978806 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.003042 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.004031 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.004091 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" gracePeriod=600 Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.021476 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0e42686-60cb-41d8-af2f-fd5832383c6a" (UID: "d0e42686-60cb-41d8-af2f-fd5832383c6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.052873 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.052909 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0e42686-60cb-41d8-af2f-fd5832383c6a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.052920 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4grb\" (UniqueName: \"kubernetes.io/projected/d0e42686-60cb-41d8-af2f-fd5832383c6a-kube-api-access-d4grb\") on node \"crc\" DevicePath \"\"" Feb 26 18:08:33 crc kubenswrapper[4805]: E0226 18:08:33.128689 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.280661 4805 generic.go:334] "Generic (PLEG): container finished" podID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerID="2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3" exitCode=0 Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.280733 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gmmpt" event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerDied","Data":"2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3"} Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.280805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmmpt" event={"ID":"d0e42686-60cb-41d8-af2f-fd5832383c6a","Type":"ContainerDied","Data":"ca8cb279c56f9edac8ab49dd665af9a82af9e2603984b72627a20fbe09397ed6"} Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.280757 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmmpt" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.280825 4805 scope.go:117] "RemoveContainer" containerID="2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.287325 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" exitCode=0 Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.287372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"} Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.288189 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:08:33 crc kubenswrapper[4805]: E0226 18:08:33.288481 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.325527 4805 scope.go:117] "RemoveContainer" containerID="08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.335179 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmmpt"] Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.343718 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmmpt"] Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.359190 4805 scope.go:117] "RemoveContainer" containerID="8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.447139 4805 scope.go:117] "RemoveContainer" containerID="2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3" Feb 26 18:08:33 crc kubenswrapper[4805]: E0226 18:08:33.447677 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3\": container with ID starting with 2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3 not found: ID does not exist" containerID="2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.447727 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3"} err="failed to get container status \"2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3\": rpc error: code = NotFound desc = could not find container 
\"2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3\": container with ID starting with 2c464856ec44c6fb1a8433d328741a1543c564a67fd44bafbef60ebe6a91a9d3 not found: ID does not exist" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.447762 4805 scope.go:117] "RemoveContainer" containerID="08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912" Feb 26 18:08:33 crc kubenswrapper[4805]: E0226 18:08:33.448214 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912\": container with ID starting with 08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912 not found: ID does not exist" containerID="08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.448252 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912"} err="failed to get container status \"08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912\": rpc error: code = NotFound desc = could not find container \"08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912\": container with ID starting with 08f55ed995e6021a0775a0294f118631777577d6d5628ce7d7bd1085d699e912 not found: ID does not exist" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.448278 4805 scope.go:117] "RemoveContainer" containerID="8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18" Feb 26 18:08:33 crc kubenswrapper[4805]: E0226 18:08:33.450505 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18\": container with ID starting with 8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18 not found: ID does not exist" 
containerID="8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.450538 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18"} err="failed to get container status \"8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18\": rpc error: code = NotFound desc = could not find container \"8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18\": container with ID starting with 8e2543b4bf0a0608aba17204b65d25d986919521fd634eec996b8f5f44f4aa18 not found: ID does not exist" Feb 26 18:08:33 crc kubenswrapper[4805]: I0226 18:08:33.450559 4805 scope.go:117] "RemoveContainer" containerID="15dbabcf6bac43c80f9b32d154755c0ee038612c5fab62982466ef8dd39a9291" Feb 26 18:08:34 crc kubenswrapper[4805]: I0226 18:08:34.967193 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" path="/var/lib/kubelet/pods/d0e42686-60cb-41d8-af2f-fd5832383c6a/volumes" Feb 26 18:08:44 crc kubenswrapper[4805]: I0226 18:08:44.953795 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:08:44 crc kubenswrapper[4805]: E0226 18:08:44.954602 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:08:47 crc kubenswrapper[4805]: I0226 18:08:47.609380 4805 scope.go:117] "RemoveContainer" containerID="4e2c384e9166d76c8abb9429c338d70f14480544f1a467d21433bd0448edd194" Feb 26 18:08:58 crc kubenswrapper[4805]: I0226 
18:08:58.953882 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:08:58 crc kubenswrapper[4805]: E0226 18:08:58.954819 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:09:04 crc kubenswrapper[4805]: E0226 18:09:04.983334 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 26 18:09:09 crc kubenswrapper[4805]: I0226 18:09:09.954091 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:09:09 crc kubenswrapper[4805]: E0226 18:09:09.955173 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:09:24 crc kubenswrapper[4805]: I0226 18:09:24.955945 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:09:24 crc kubenswrapper[4805]: E0226 18:09:24.958827 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:09:35 crc kubenswrapper[4805]: I0226 18:09:35.953414 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:09:35 crc kubenswrapper[4805]: E0226 18:09:35.954161 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:09:46 crc kubenswrapper[4805]: I0226 18:09:46.961589 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691" Feb 26 18:09:46 crc kubenswrapper[4805]: E0226 18:09:46.962157 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.165156 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535490-pmznq"] Feb 26 18:10:00 crc kubenswrapper[4805]: E0226 18:10:00.166221 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="registry-server" Feb 26 18:10:00 crc 
kubenswrapper[4805]: I0226 18:10:00.166239 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="registry-server" Feb 26 18:10:00 crc kubenswrapper[4805]: E0226 18:10:00.166259 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="extract-content" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.166271 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="extract-content" Feb 26 18:10:00 crc kubenswrapper[4805]: E0226 18:10:00.166309 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="extract-utilities" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.166318 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="extract-utilities" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.166583 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e42686-60cb-41d8-af2f-fd5832383c6a" containerName="registry-server" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.167547 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535490-pmznq" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.175064 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.175523 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.175709 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.207217 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535490-pmznq"] Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.226660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cpzs\" (UniqueName: \"kubernetes.io/projected/9670664a-aeea-4a81-8789-cd93b87c8a03-kube-api-access-9cpzs\") pod \"auto-csr-approver-29535490-pmznq\" (UID: \"9670664a-aeea-4a81-8789-cd93b87c8a03\") " pod="openshift-infra/auto-csr-approver-29535490-pmznq" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.329603 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cpzs\" (UniqueName: \"kubernetes.io/projected/9670664a-aeea-4a81-8789-cd93b87c8a03-kube-api-access-9cpzs\") pod \"auto-csr-approver-29535490-pmznq\" (UID: \"9670664a-aeea-4a81-8789-cd93b87c8a03\") " pod="openshift-infra/auto-csr-approver-29535490-pmznq" Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.362537 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cpzs\" (UniqueName: \"kubernetes.io/projected/9670664a-aeea-4a81-8789-cd93b87c8a03-kube-api-access-9cpzs\") pod \"auto-csr-approver-29535490-pmznq\" (UID: \"9670664a-aeea-4a81-8789-cd93b87c8a03\") " 
pod="openshift-infra/auto-csr-approver-29535490-pmznq"
Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.498488 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535490-pmznq"
Feb 26 18:10:00 crc kubenswrapper[4805]: I0226 18:10:00.984152 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535490-pmznq"]
Feb 26 18:10:01 crc kubenswrapper[4805]: I0226 18:10:01.171097 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535490-pmznq" event={"ID":"9670664a-aeea-4a81-8789-cd93b87c8a03","Type":"ContainerStarted","Data":"c90b54c8d5433ff7eda765ad6da173493d7208d0fc5329bbe58044961e4f38ad"}
Feb 26 18:10:01 crc kubenswrapper[4805]: I0226 18:10:01.952841 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:10:01 crc kubenswrapper[4805]: E0226 18:10:01.953309 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:10:03 crc kubenswrapper[4805]: I0226 18:10:03.193215 4805 generic.go:334] "Generic (PLEG): container finished" podID="9670664a-aeea-4a81-8789-cd93b87c8a03" containerID="c4e594f4db8223809056923bfddcb40a1f82f6ca5b2bd1d26e326a5fdb1df968" exitCode=0
Feb 26 18:10:03 crc kubenswrapper[4805]: I0226 18:10:03.193308 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535490-pmznq" event={"ID":"9670664a-aeea-4a81-8789-cd93b87c8a03","Type":"ContainerDied","Data":"c4e594f4db8223809056923bfddcb40a1f82f6ca5b2bd1d26e326a5fdb1df968"}
Feb 26 18:10:04 crc kubenswrapper[4805]: I0226 18:10:04.610194 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535490-pmznq"
Feb 26 18:10:04 crc kubenswrapper[4805]: I0226 18:10:04.727279 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cpzs\" (UniqueName: \"kubernetes.io/projected/9670664a-aeea-4a81-8789-cd93b87c8a03-kube-api-access-9cpzs\") pod \"9670664a-aeea-4a81-8789-cd93b87c8a03\" (UID: \"9670664a-aeea-4a81-8789-cd93b87c8a03\") "
Feb 26 18:10:04 crc kubenswrapper[4805]: I0226 18:10:04.734506 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9670664a-aeea-4a81-8789-cd93b87c8a03-kube-api-access-9cpzs" (OuterVolumeSpecName: "kube-api-access-9cpzs") pod "9670664a-aeea-4a81-8789-cd93b87c8a03" (UID: "9670664a-aeea-4a81-8789-cd93b87c8a03"). InnerVolumeSpecName "kube-api-access-9cpzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:10:04 crc kubenswrapper[4805]: I0226 18:10:04.832891 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cpzs\" (UniqueName: \"kubernetes.io/projected/9670664a-aeea-4a81-8789-cd93b87c8a03-kube-api-access-9cpzs\") on node \"crc\" DevicePath \"\""
Feb 26 18:10:05 crc kubenswrapper[4805]: I0226 18:10:05.223144 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535490-pmznq" event={"ID":"9670664a-aeea-4a81-8789-cd93b87c8a03","Type":"ContainerDied","Data":"c90b54c8d5433ff7eda765ad6da173493d7208d0fc5329bbe58044961e4f38ad"}
Feb 26 18:10:05 crc kubenswrapper[4805]: I0226 18:10:05.223193 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90b54c8d5433ff7eda765ad6da173493d7208d0fc5329bbe58044961e4f38ad"
Feb 26 18:10:05 crc kubenswrapper[4805]: I0226 18:10:05.223220 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535490-pmznq"
Feb 26 18:10:05 crc kubenswrapper[4805]: I0226 18:10:05.709281 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535484-lqj4h"]
Feb 26 18:10:05 crc kubenswrapper[4805]: I0226 18:10:05.720655 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535484-lqj4h"]
Feb 26 18:10:06 crc kubenswrapper[4805]: I0226 18:10:06.963859 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54d572b5-47a3-4ca9-a5dc-d1cd09d005fd" path="/var/lib/kubelet/pods/54d572b5-47a3-4ca9-a5dc-d1cd09d005fd/volumes"
Feb 26 18:10:15 crc kubenswrapper[4805]: I0226 18:10:15.954377 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:10:15 crc kubenswrapper[4805]: E0226 18:10:15.955086 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:10:30 crc kubenswrapper[4805]: I0226 18:10:30.953263 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:10:30 crc kubenswrapper[4805]: E0226 18:10:30.954072 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:10:43 crc kubenswrapper[4805]: I0226 18:10:43.953453 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:10:43 crc kubenswrapper[4805]: E0226 18:10:43.954354 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:10:47 crc kubenswrapper[4805]: I0226 18:10:47.717172 4805 scope.go:117] "RemoveContainer" containerID="1f1d52e8831a1bf5a4a374a0da7cb5b63e0fe23fdfdfa1662d4660509163ea5b"
Feb 26 18:10:56 crc kubenswrapper[4805]: I0226 18:10:56.966442 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:10:56 crc kubenswrapper[4805]: E0226 18:10:56.967639 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:11:10 crc kubenswrapper[4805]: I0226 18:11:10.953978 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:11:10 crc kubenswrapper[4805]: E0226 18:11:10.954930 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:11:24 crc kubenswrapper[4805]: I0226 18:11:24.953322 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:11:24 crc kubenswrapper[4805]: E0226 18:11:24.954521 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:11:37 crc kubenswrapper[4805]: I0226 18:11:37.953011 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:11:37 crc kubenswrapper[4805]: E0226 18:11:37.954235 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:11:51 crc kubenswrapper[4805]: I0226 18:11:51.955852 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:11:51 crc kubenswrapper[4805]: E0226 18:11:51.956641 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.191447 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535492-cqnbn"]
Feb 26 18:12:00 crc kubenswrapper[4805]: E0226 18:12:00.192657 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9670664a-aeea-4a81-8789-cd93b87c8a03" containerName="oc"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.192678 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9670664a-aeea-4a81-8789-cd93b87c8a03" containerName="oc"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.192996 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9670664a-aeea-4a81-8789-cd93b87c8a03" containerName="oc"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.193984 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.196747 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.197474 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.198272 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.206147 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535492-cqnbn"]
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.242374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc5wd\" (UniqueName: \"kubernetes.io/projected/b2ec298b-b595-48c0-8bab-35dbefa4a5e5-kube-api-access-rc5wd\") pod \"auto-csr-approver-29535492-cqnbn\" (UID: \"b2ec298b-b595-48c0-8bab-35dbefa4a5e5\") " pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.344460 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc5wd\" (UniqueName: \"kubernetes.io/projected/b2ec298b-b595-48c0-8bab-35dbefa4a5e5-kube-api-access-rc5wd\") pod \"auto-csr-approver-29535492-cqnbn\" (UID: \"b2ec298b-b595-48c0-8bab-35dbefa4a5e5\") " pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.363717 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc5wd\" (UniqueName: \"kubernetes.io/projected/b2ec298b-b595-48c0-8bab-35dbefa4a5e5-kube-api-access-rc5wd\") pod \"auto-csr-approver-29535492-cqnbn\" (UID: \"b2ec298b-b595-48c0-8bab-35dbefa4a5e5\") " pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:00 crc kubenswrapper[4805]: I0226 18:12:00.522530 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:01 crc kubenswrapper[4805]: W0226 18:12:01.014823 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ec298b_b595_48c0_8bab_35dbefa4a5e5.slice/crio-60235f3cc2dbe8f855cc9c53e6ee9a07cb57b217203bf08154e718f32bc67ac4 WatchSource:0}: Error finding container 60235f3cc2dbe8f855cc9c53e6ee9a07cb57b217203bf08154e718f32bc67ac4: Status 404 returned error can't find the container with id 60235f3cc2dbe8f855cc9c53e6ee9a07cb57b217203bf08154e718f32bc67ac4
Feb 26 18:12:01 crc kubenswrapper[4805]: I0226 18:12:01.017538 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535492-cqnbn"]
Feb 26 18:12:01 crc kubenswrapper[4805]: I0226 18:12:01.019236 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 18:12:01 crc kubenswrapper[4805]: I0226 18:12:01.344477 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535492-cqnbn" event={"ID":"b2ec298b-b595-48c0-8bab-35dbefa4a5e5","Type":"ContainerStarted","Data":"60235f3cc2dbe8f855cc9c53e6ee9a07cb57b217203bf08154e718f32bc67ac4"}
Feb 26 18:12:03 crc kubenswrapper[4805]: I0226 18:12:03.369434 4805 generic.go:334] "Generic (PLEG): container finished" podID="b2ec298b-b595-48c0-8bab-35dbefa4a5e5" containerID="cc81d72902506a817bf18e2c9a26e1d0d9e5464925ce589eb1a2ebe52627508a" exitCode=0
Feb 26 18:12:03 crc kubenswrapper[4805]: I0226 18:12:03.369497 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535492-cqnbn" event={"ID":"b2ec298b-b595-48c0-8bab-35dbefa4a5e5","Type":"ContainerDied","Data":"cc81d72902506a817bf18e2c9a26e1d0d9e5464925ce589eb1a2ebe52627508a"}
Feb 26 18:12:04 crc kubenswrapper[4805]: I0226 18:12:04.854621 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.053425 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc5wd\" (UniqueName: \"kubernetes.io/projected/b2ec298b-b595-48c0-8bab-35dbefa4a5e5-kube-api-access-rc5wd\") pod \"b2ec298b-b595-48c0-8bab-35dbefa4a5e5\" (UID: \"b2ec298b-b595-48c0-8bab-35dbefa4a5e5\") "
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.065277 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ec298b-b595-48c0-8bab-35dbefa4a5e5-kube-api-access-rc5wd" (OuterVolumeSpecName: "kube-api-access-rc5wd") pod "b2ec298b-b595-48c0-8bab-35dbefa4a5e5" (UID: "b2ec298b-b595-48c0-8bab-35dbefa4a5e5"). InnerVolumeSpecName "kube-api-access-rc5wd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.158428 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc5wd\" (UniqueName: \"kubernetes.io/projected/b2ec298b-b595-48c0-8bab-35dbefa4a5e5-kube-api-access-rc5wd\") on node \"crc\" DevicePath \"\""
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.391387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535492-cqnbn" event={"ID":"b2ec298b-b595-48c0-8bab-35dbefa4a5e5","Type":"ContainerDied","Data":"60235f3cc2dbe8f855cc9c53e6ee9a07cb57b217203bf08154e718f32bc67ac4"}
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.391442 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60235f3cc2dbe8f855cc9c53e6ee9a07cb57b217203bf08154e718f32bc67ac4"
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.391808 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535492-cqnbn"
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.925564 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535486-76rct"]
Feb 26 18:12:05 crc kubenswrapper[4805]: I0226 18:12:05.934451 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535486-76rct"]
Feb 26 18:12:06 crc kubenswrapper[4805]: I0226 18:12:06.969253 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:12:06 crc kubenswrapper[4805]: E0226 18:12:06.969875 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:12:06 crc kubenswrapper[4805]: I0226 18:12:06.969885 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbee537-f41b-4360-b00f-a303e9ac4465" path="/var/lib/kubelet/pods/acbee537-f41b-4360-b00f-a303e9ac4465/volumes"
Feb 26 18:12:21 crc kubenswrapper[4805]: I0226 18:12:21.954400 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:12:21 crc kubenswrapper[4805]: E0226 18:12:21.955584 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:12:36 crc kubenswrapper[4805]: I0226 18:12:36.963554 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:12:36 crc kubenswrapper[4805]: E0226 18:12:36.964488 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:12:47 crc kubenswrapper[4805]: I0226 18:12:47.881087 4805 scope.go:117] "RemoveContainer" containerID="9a9353c37fe195c1555391589b64faad7b92d590b36b5dc5f3ae5e8b81feacd3"
Feb 26 18:12:51 crc kubenswrapper[4805]: I0226 18:12:51.952739 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:12:51 crc kubenswrapper[4805]: E0226 18:12:51.953387 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:13:03 crc kubenswrapper[4805]: I0226 18:13:03.953698 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:13:03 crc kubenswrapper[4805]: E0226 18:13:03.954482 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:13:18 crc kubenswrapper[4805]: I0226 18:13:18.954634 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:13:18 crc kubenswrapper[4805]: E0226 18:13:18.955410 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8"
Feb 26 18:13:32 crc kubenswrapper[4805]: I0226 18:13:32.978676 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5bw8x"]
Feb 26 18:13:32 crc kubenswrapper[4805]: E0226 18:13:32.980247 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ec298b-b595-48c0-8bab-35dbefa4a5e5" containerName="oc"
Feb 26 18:13:32 crc kubenswrapper[4805]: I0226 18:13:32.980270 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ec298b-b595-48c0-8bab-35dbefa4a5e5" containerName="oc"
Feb 26 18:13:32 crc kubenswrapper[4805]: I0226 18:13:32.980609 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ec298b-b595-48c0-8bab-35dbefa4a5e5" containerName="oc"
Feb 26 18:13:32 crc kubenswrapper[4805]: I0226 18:13:32.983245 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.005226 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5bw8x"]
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.182246 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkf4\" (UniqueName: \"kubernetes.io/projected/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-kube-api-access-gqkf4\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.182352 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-catalog-content\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.182387 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-utilities\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.283657 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkf4\" (UniqueName: \"kubernetes.io/projected/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-kube-api-access-gqkf4\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.283749 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-catalog-content\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.283783 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-utilities\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.284350 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-catalog-content\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.284420 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-utilities\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.302650 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkf4\" (UniqueName: \"kubernetes.io/projected/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-kube-api-access-gqkf4\") pod \"redhat-operators-5bw8x\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") " pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.330426 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.791916 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5bw8x"]
Feb 26 18:13:33 crc kubenswrapper[4805]: I0226 18:13:33.953698 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:13:34 crc kubenswrapper[4805]: I0226 18:13:34.356825 4805 generic.go:334] "Generic (PLEG): container finished" podID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerID="e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52" exitCode=0
Feb 26 18:13:34 crc kubenswrapper[4805]: I0226 18:13:34.357083 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerDied","Data":"e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52"}
Feb 26 18:13:34 crc kubenswrapper[4805]: I0226 18:13:34.357259 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerStarted","Data":"e0973e29455fe46058cd36715db89701091dd4f65dbb7b2bc3f63a6465afaab0"}
Feb 26 18:13:34 crc kubenswrapper[4805]: I0226 18:13:34.364710 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"e1edd14159ce6432f8e6114fdcbd7c7a338238d8dfa653a4f4dfc4085024378c"}
Feb 26 18:13:35 crc kubenswrapper[4805]: I0226 18:13:35.375286 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerStarted","Data":"75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852"}
Feb 26 18:13:41 crc kubenswrapper[4805]: I0226 18:13:41.447433 4805 generic.go:334] "Generic (PLEG): container finished" podID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerID="75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852" exitCode=0
Feb 26 18:13:41 crc kubenswrapper[4805]: I0226 18:13:41.447528 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerDied","Data":"75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852"}
Feb 26 18:13:42 crc kubenswrapper[4805]: I0226 18:13:42.460787 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerStarted","Data":"55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1"}
Feb 26 18:13:42 crc kubenswrapper[4805]: I0226 18:13:42.488275 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5bw8x" podStartSLOduration=2.849621114 podStartE2EDuration="10.488253164s" podCreationTimestamp="2026-02-26 18:13:32 +0000 UTC" firstStartedPulling="2026-02-26 18:13:34.359005209 +0000 UTC m=+3528.920759548" lastFinishedPulling="2026-02-26 18:13:41.997637259 +0000 UTC m=+3536.559391598" observedRunningTime="2026-02-26 18:13:42.479153363 +0000 UTC m=+3537.040907742" watchObservedRunningTime="2026-02-26 18:13:42.488253164 +0000 UTC m=+3537.050007513"
Feb 26 18:13:43 crc kubenswrapper[4805]: I0226 18:13:43.330773 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:43 crc kubenswrapper[4805]: I0226 18:13:43.331083 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:44 crc kubenswrapper[4805]: I0226 18:13:44.382443 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5bw8x" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="registry-server" probeResult="failure" output=<
Feb 26 18:13:44 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s
Feb 26 18:13:44 crc kubenswrapper[4805]: >
Feb 26 18:13:53 crc kubenswrapper[4805]: I0226 18:13:53.374949 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:53 crc kubenswrapper[4805]: I0226 18:13:53.434326 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:53 crc kubenswrapper[4805]: I0226 18:13:53.619923 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5bw8x"]
Feb 26 18:13:54 crc kubenswrapper[4805]: I0226 18:13:54.585512 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5bw8x" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="registry-server" containerID="cri-o://55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1" gracePeriod=2
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.247693 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.386948 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-utilities\") pod \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") "
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.387091 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkf4\" (UniqueName: \"kubernetes.io/projected/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-kube-api-access-gqkf4\") pod \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") "
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.387127 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-catalog-content\") pod \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\" (UID: \"a03acdf6-9e4e-444d-ad99-f1d56533ccf8\") "
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.387934 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-utilities" (OuterVolumeSpecName: "utilities") pod "a03acdf6-9e4e-444d-ad99-f1d56533ccf8" (UID: "a03acdf6-9e4e-444d-ad99-f1d56533ccf8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.392783 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-kube-api-access-gqkf4" (OuterVolumeSpecName: "kube-api-access-gqkf4") pod "a03acdf6-9e4e-444d-ad99-f1d56533ccf8" (UID: "a03acdf6-9e4e-444d-ad99-f1d56533ccf8"). InnerVolumeSpecName "kube-api-access-gqkf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.489446 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.489485 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkf4\" (UniqueName: \"kubernetes.io/projected/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-kube-api-access-gqkf4\") on node \"crc\" DevicePath \"\""
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.517492 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a03acdf6-9e4e-444d-ad99-f1d56533ccf8" (UID: "a03acdf6-9e4e-444d-ad99-f1d56533ccf8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.590846 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a03acdf6-9e4e-444d-ad99-f1d56533ccf8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.596855 4805 generic.go:334] "Generic (PLEG): container finished" podID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerID="55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1" exitCode=0
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.596890 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerDied","Data":"55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1"}
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.596923 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5bw8x" event={"ID":"a03acdf6-9e4e-444d-ad99-f1d56533ccf8","Type":"ContainerDied","Data":"e0973e29455fe46058cd36715db89701091dd4f65dbb7b2bc3f63a6465afaab0"}
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.596924 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5bw8x"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.596943 4805 scope.go:117] "RemoveContainer" containerID="55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.614591 4805 scope.go:117] "RemoveContainer" containerID="75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.641923 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5bw8x"]
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.645253 4805 scope.go:117] "RemoveContainer" containerID="e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.651144 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5bw8x"]
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.683602 4805 scope.go:117] "RemoveContainer" containerID="55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1"
Feb 26 18:13:55 crc kubenswrapper[4805]: E0226 18:13:55.684521 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1\": container with ID starting with 55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1 not found: ID does not exist" containerID="55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.684561 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1"} err="failed to get container status \"55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1\": rpc error: code = NotFound desc = could not find container \"55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1\": container with ID starting with 55af667139b7a39e1b99c4721c897346a9de143b0e4ef6ac15670708eeffdfd1 not found: ID does not exist"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.684586 4805 scope.go:117] "RemoveContainer" containerID="75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852"
Feb 26 18:13:55 crc kubenswrapper[4805]: E0226 18:13:55.685073 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852\": container with ID starting with 75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852 not found: ID does not exist" containerID="75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.685197 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852"} err="failed to get container status \"75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852\": rpc error: code = NotFound desc = could not find container \"75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852\": container with ID starting with 75b754e28ca0f39e8dab0d1dc1ac50322277c0aedf0b36dfd4e3ce110abaa852 not found: ID does not exist"
Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.685278 4805 scope.go:117] "RemoveContainer" containerID="e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52"
Feb 26 18:13:55 crc kubenswrapper[4805]: E0226 18:13:55.685616 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52\": container with ID starting with e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52 not found: ID does not exist"
containerID="e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52" Feb 26 18:13:55 crc kubenswrapper[4805]: I0226 18:13:55.685643 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52"} err="failed to get container status \"e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52\": rpc error: code = NotFound desc = could not find container \"e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52\": container with ID starting with e4065ec0fab45d3a063162bbd9a1d56d7b39ad6c3647a7d06f4443fcda261b52 not found: ID does not exist" Feb 26 18:13:56 crc kubenswrapper[4805]: I0226 18:13:56.975822 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" path="/var/lib/kubelet/pods/a03acdf6-9e4e-444d-ad99-f1d56533ccf8/volumes" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.169457 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535494-fggpt"] Feb 26 18:14:00 crc kubenswrapper[4805]: E0226 18:14:00.171203 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="registry-server" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.171234 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="registry-server" Feb 26 18:14:00 crc kubenswrapper[4805]: E0226 18:14:00.171271 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="extract-utilities" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.171290 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="extract-utilities" Feb 26 18:14:00 crc kubenswrapper[4805]: E0226 18:14:00.171325 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="extract-content" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.171342 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="extract-content" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.171853 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a03acdf6-9e4e-444d-ad99-f1d56533ccf8" containerName="registry-server" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.173270 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.178774 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.179316 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.179409 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.191962 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535494-fggpt"] Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.289133 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rh57\" (UniqueName: \"kubernetes.io/projected/0069ef07-ee19-4617-8617-9db194e2e3d9-kube-api-access-9rh57\") pod \"auto-csr-approver-29535494-fggpt\" (UID: \"0069ef07-ee19-4617-8617-9db194e2e3d9\") " pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.391604 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rh57\" 
(UniqueName: \"kubernetes.io/projected/0069ef07-ee19-4617-8617-9db194e2e3d9-kube-api-access-9rh57\") pod \"auto-csr-approver-29535494-fggpt\" (UID: \"0069ef07-ee19-4617-8617-9db194e2e3d9\") " pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.415125 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rh57\" (UniqueName: \"kubernetes.io/projected/0069ef07-ee19-4617-8617-9db194e2e3d9-kube-api-access-9rh57\") pod \"auto-csr-approver-29535494-fggpt\" (UID: \"0069ef07-ee19-4617-8617-9db194e2e3d9\") " pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.518365 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:00 crc kubenswrapper[4805]: I0226 18:14:00.973522 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535494-fggpt"] Feb 26 18:14:01 crc kubenswrapper[4805]: I0226 18:14:01.677467 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535494-fggpt" event={"ID":"0069ef07-ee19-4617-8617-9db194e2e3d9","Type":"ContainerStarted","Data":"7336410023d7b0189512d3ad83ccbb7247e4ee9233a8e95da6e32c9e7aa055bb"} Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.140502 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w445x"] Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.143137 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.176131 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w445x"] Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.235371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-utilities\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.235455 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxxq\" (UniqueName: \"kubernetes.io/projected/6043ad32-e6d8-4bc8-a157-2bc38e25e093-kube-api-access-jhxxq\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.235660 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-catalog-content\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.337572 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-catalog-content\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.337691 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-utilities\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.337745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxxq\" (UniqueName: \"kubernetes.io/projected/6043ad32-e6d8-4bc8-a157-2bc38e25e093-kube-api-access-jhxxq\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.338225 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-catalog-content\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.338277 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-utilities\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.360106 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxxq\" (UniqueName: \"kubernetes.io/projected/6043ad32-e6d8-4bc8-a157-2bc38e25e093-kube-api-access-jhxxq\") pod \"certified-operators-w445x\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.479688 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:02 crc kubenswrapper[4805]: I0226 18:14:02.993557 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w445x"] Feb 26 18:14:03 crc kubenswrapper[4805]: I0226 18:14:03.721265 4805 generic.go:334] "Generic (PLEG): container finished" podID="0069ef07-ee19-4617-8617-9db194e2e3d9" containerID="d32bfb0323b61455b413ac10fcceb3bd4d53887be96a5fcb7beb43578304bc9a" exitCode=0 Feb 26 18:14:03 crc kubenswrapper[4805]: I0226 18:14:03.721595 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535494-fggpt" event={"ID":"0069ef07-ee19-4617-8617-9db194e2e3d9","Type":"ContainerDied","Data":"d32bfb0323b61455b413ac10fcceb3bd4d53887be96a5fcb7beb43578304bc9a"} Feb 26 18:14:03 crc kubenswrapper[4805]: I0226 18:14:03.726429 4805 generic.go:334] "Generic (PLEG): container finished" podID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerID="10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab" exitCode=0 Feb 26 18:14:03 crc kubenswrapper[4805]: I0226 18:14:03.726466 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerDied","Data":"10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab"} Feb 26 18:14:03 crc kubenswrapper[4805]: I0226 18:14:03.726489 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerStarted","Data":"2020940cf3b9face6b7befbeee17bbc4a8f3b1681f59777a54a6ae7b5c32cc44"} Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.217418 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.307062 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rh57\" (UniqueName: \"kubernetes.io/projected/0069ef07-ee19-4617-8617-9db194e2e3d9-kube-api-access-9rh57\") pod \"0069ef07-ee19-4617-8617-9db194e2e3d9\" (UID: \"0069ef07-ee19-4617-8617-9db194e2e3d9\") " Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.313715 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0069ef07-ee19-4617-8617-9db194e2e3d9-kube-api-access-9rh57" (OuterVolumeSpecName: "kube-api-access-9rh57") pod "0069ef07-ee19-4617-8617-9db194e2e3d9" (UID: "0069ef07-ee19-4617-8617-9db194e2e3d9"). InnerVolumeSpecName "kube-api-access-9rh57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.409522 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rh57\" (UniqueName: \"kubernetes.io/projected/0069ef07-ee19-4617-8617-9db194e2e3d9-kube-api-access-9rh57\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.756935 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerStarted","Data":"3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72"} Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.758710 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535494-fggpt" event={"ID":"0069ef07-ee19-4617-8617-9db194e2e3d9","Type":"ContainerDied","Data":"7336410023d7b0189512d3ad83ccbb7247e4ee9233a8e95da6e32c9e7aa055bb"} Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.758889 4805 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7336410023d7b0189512d3ad83ccbb7247e4ee9233a8e95da6e32c9e7aa055bb" Feb 26 18:14:05 crc kubenswrapper[4805]: I0226 18:14:05.758732 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535494-fggpt" Feb 26 18:14:06 crc kubenswrapper[4805]: I0226 18:14:06.303097 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535488-rnhtp"] Feb 26 18:14:06 crc kubenswrapper[4805]: I0226 18:14:06.315559 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535488-rnhtp"] Feb 26 18:14:06 crc kubenswrapper[4805]: I0226 18:14:06.773942 4805 generic.go:334] "Generic (PLEG): container finished" podID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerID="3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72" exitCode=0 Feb 26 18:14:06 crc kubenswrapper[4805]: I0226 18:14:06.773985 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerDied","Data":"3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72"} Feb 26 18:14:06 crc kubenswrapper[4805]: I0226 18:14:06.969886 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645f1d1b-d370-4524-bdde-068cddfcf41a" path="/var/lib/kubelet/pods/645f1d1b-d370-4524-bdde-068cddfcf41a/volumes" Feb 26 18:14:07 crc kubenswrapper[4805]: I0226 18:14:07.788719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerStarted","Data":"840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e"} Feb 26 18:14:07 crc kubenswrapper[4805]: I0226 18:14:07.807918 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w445x" podStartSLOduration=2.391438831 
podStartE2EDuration="5.807894921s" podCreationTimestamp="2026-02-26 18:14:02 +0000 UTC" firstStartedPulling="2026-02-26 18:14:03.728477925 +0000 UTC m=+3558.290232274" lastFinishedPulling="2026-02-26 18:14:07.144933985 +0000 UTC m=+3561.706688364" observedRunningTime="2026-02-26 18:14:07.806177187 +0000 UTC m=+3562.367931546" watchObservedRunningTime="2026-02-26 18:14:07.807894921 +0000 UTC m=+3562.369649270" Feb 26 18:14:12 crc kubenswrapper[4805]: I0226 18:14:12.480473 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:12 crc kubenswrapper[4805]: I0226 18:14:12.480859 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:12 crc kubenswrapper[4805]: I0226 18:14:12.564400 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:12 crc kubenswrapper[4805]: I0226 18:14:12.909438 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:12 crc kubenswrapper[4805]: I0226 18:14:12.977750 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w445x"] Feb 26 18:14:14 crc kubenswrapper[4805]: I0226 18:14:14.865491 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w445x" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="registry-server" containerID="cri-o://840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e" gracePeriod=2 Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.490371 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.541166 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhxxq\" (UniqueName: \"kubernetes.io/projected/6043ad32-e6d8-4bc8-a157-2bc38e25e093-kube-api-access-jhxxq\") pod \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.541243 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-utilities\") pod \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.541267 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-catalog-content\") pod \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\" (UID: \"6043ad32-e6d8-4bc8-a157-2bc38e25e093\") " Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.542873 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-utilities" (OuterVolumeSpecName: "utilities") pod "6043ad32-e6d8-4bc8-a157-2bc38e25e093" (UID: "6043ad32-e6d8-4bc8-a157-2bc38e25e093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.549140 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6043ad32-e6d8-4bc8-a157-2bc38e25e093-kube-api-access-jhxxq" (OuterVolumeSpecName: "kube-api-access-jhxxq") pod "6043ad32-e6d8-4bc8-a157-2bc38e25e093" (UID: "6043ad32-e6d8-4bc8-a157-2bc38e25e093"). InnerVolumeSpecName "kube-api-access-jhxxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.643688 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhxxq\" (UniqueName: \"kubernetes.io/projected/6043ad32-e6d8-4bc8-a157-2bc38e25e093-kube-api-access-jhxxq\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.643957 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.877701 4805 generic.go:334] "Generic (PLEG): container finished" podID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerID="840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e" exitCode=0 Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.877752 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w445x" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.877762 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerDied","Data":"840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e"} Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.877825 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w445x" event={"ID":"6043ad32-e6d8-4bc8-a157-2bc38e25e093","Type":"ContainerDied","Data":"2020940cf3b9face6b7befbeee17bbc4a8f3b1681f59777a54a6ae7b5c32cc44"} Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.877850 4805 scope.go:117] "RemoveContainer" containerID="840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.907510 4805 scope.go:117] "RemoveContainer" 
containerID="3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72" Feb 26 18:14:15 crc kubenswrapper[4805]: I0226 18:14:15.933182 4805 scope.go:117] "RemoveContainer" containerID="10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.008058 4805 scope.go:117] "RemoveContainer" containerID="840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e" Feb 26 18:14:16 crc kubenswrapper[4805]: E0226 18:14:16.008465 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e\": container with ID starting with 840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e not found: ID does not exist" containerID="840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.008504 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e"} err="failed to get container status \"840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e\": rpc error: code = NotFound desc = could not find container \"840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e\": container with ID starting with 840efb7f920cf5bfcde664e9a310c5b410e626d78766eb3ecdb914e3f3e9785e not found: ID does not exist" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.008531 4805 scope.go:117] "RemoveContainer" containerID="3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72" Feb 26 18:14:16 crc kubenswrapper[4805]: E0226 18:14:16.008948 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72\": container with ID starting with 
3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72 not found: ID does not exist" containerID="3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.008976 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72"} err="failed to get container status \"3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72\": rpc error: code = NotFound desc = could not find container \"3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72\": container with ID starting with 3f8645ee4e5d46117b22d0043893e778523d56c6dd00bbb951a1893a3da7cc72 not found: ID does not exist" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.008995 4805 scope.go:117] "RemoveContainer" containerID="10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab" Feb 26 18:14:16 crc kubenswrapper[4805]: E0226 18:14:16.009451 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab\": container with ID starting with 10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab not found: ID does not exist" containerID="10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.009499 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab"} err="failed to get container status \"10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab\": rpc error: code = NotFound desc = could not find container \"10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab\": container with ID starting with 10c7e9606dbad915058f0b44dad48d50d31e443bcf99621e171b63eb7d1a1dab not found: ID does not 
exist" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.374318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6043ad32-e6d8-4bc8-a157-2bc38e25e093" (UID: "6043ad32-e6d8-4bc8-a157-2bc38e25e093"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.460542 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6043ad32-e6d8-4bc8-a157-2bc38e25e093-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.523195 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w445x"] Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.532568 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w445x"] Feb 26 18:14:16 crc kubenswrapper[4805]: I0226 18:14:16.975885 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" path="/var/lib/kubelet/pods/6043ad32-e6d8-4bc8-a157-2bc38e25e093/volumes" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.205218 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8c6hb"] Feb 26 18:14:32 crc kubenswrapper[4805]: E0226 18:14:32.206361 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="extract-utilities" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.206382 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="extract-utilities" Feb 26 18:14:32 crc kubenswrapper[4805]: E0226 18:14:32.206407 4805 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="registry-server" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.206417 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="registry-server" Feb 26 18:14:32 crc kubenswrapper[4805]: E0226 18:14:32.206440 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="extract-content" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.206451 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="extract-content" Feb 26 18:14:32 crc kubenswrapper[4805]: E0226 18:14:32.206491 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0069ef07-ee19-4617-8617-9db194e2e3d9" containerName="oc" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.206502 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0069ef07-ee19-4617-8617-9db194e2e3d9" containerName="oc" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.206881 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0069ef07-ee19-4617-8617-9db194e2e3d9" containerName="oc" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.206914 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6043ad32-e6d8-4bc8-a157-2bc38e25e093" containerName="registry-server" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.209344 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.222520 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8c6hb"] Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.307188 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-utilities\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.307349 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-catalog-content\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.307389 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462w7\" (UniqueName: \"kubernetes.io/projected/fd83e24f-0a04-4d06-801b-27202aec7cba-kube-api-access-462w7\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.410233 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-utilities\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.411110 4805 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-catalog-content\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.411224 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-utilities\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.411257 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462w7\" (UniqueName: \"kubernetes.io/projected/fd83e24f-0a04-4d06-801b-27202aec7cba-kube-api-access-462w7\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.411696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-catalog-content\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.434199 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462w7\" (UniqueName: \"kubernetes.io/projected/fd83e24f-0a04-4d06-801b-27202aec7cba-kube-api-access-462w7\") pod \"redhat-marketplace-8c6hb\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:32 crc kubenswrapper[4805]: I0226 18:14:32.553217 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:33 crc kubenswrapper[4805]: I0226 18:14:33.066713 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8c6hb"] Feb 26 18:14:33 crc kubenswrapper[4805]: I0226 18:14:33.091433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerStarted","Data":"790ad17377a33798e44c20dd12688c5adabe4acc5e7de94e6f5778cf4907dbd2"} Feb 26 18:14:34 crc kubenswrapper[4805]: I0226 18:14:34.103695 4805 generic.go:334] "Generic (PLEG): container finished" podID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerID="4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f" exitCode=0 Feb 26 18:14:34 crc kubenswrapper[4805]: I0226 18:14:34.103934 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerDied","Data":"4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f"} Feb 26 18:14:36 crc kubenswrapper[4805]: I0226 18:14:36.129853 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerStarted","Data":"59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe"} Feb 26 18:14:37 crc kubenswrapper[4805]: I0226 18:14:37.140287 4805 generic.go:334] "Generic (PLEG): container finished" podID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerID="59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe" exitCode=0 Feb 26 18:14:37 crc kubenswrapper[4805]: I0226 18:14:37.140352 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" 
event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerDied","Data":"59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe"} Feb 26 18:14:38 crc kubenswrapper[4805]: I0226 18:14:38.154929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerStarted","Data":"355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018"} Feb 26 18:14:38 crc kubenswrapper[4805]: I0226 18:14:38.178338 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8c6hb" podStartSLOduration=2.6681601329999998 podStartE2EDuration="6.178317864s" podCreationTimestamp="2026-02-26 18:14:32 +0000 UTC" firstStartedPulling="2026-02-26 18:14:34.106267337 +0000 UTC m=+3588.668021676" lastFinishedPulling="2026-02-26 18:14:37.616425068 +0000 UTC m=+3592.178179407" observedRunningTime="2026-02-26 18:14:38.174840275 +0000 UTC m=+3592.736594624" watchObservedRunningTime="2026-02-26 18:14:38.178317864 +0000 UTC m=+3592.740072203" Feb 26 18:14:42 crc kubenswrapper[4805]: I0226 18:14:42.553836 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:42 crc kubenswrapper[4805]: I0226 18:14:42.554440 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:42 crc kubenswrapper[4805]: I0226 18:14:42.608681 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:43 crc kubenswrapper[4805]: I0226 18:14:43.250796 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:44 crc kubenswrapper[4805]: I0226 18:14:44.204537 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8c6hb"] Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.233669 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8c6hb" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="registry-server" containerID="cri-o://355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018" gracePeriod=2 Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.823502 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.938196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-utilities\") pod \"fd83e24f-0a04-4d06-801b-27202aec7cba\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.938245 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-catalog-content\") pod \"fd83e24f-0a04-4d06-801b-27202aec7cba\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.938477 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462w7\" (UniqueName: \"kubernetes.io/projected/fd83e24f-0a04-4d06-801b-27202aec7cba-kube-api-access-462w7\") pod \"fd83e24f-0a04-4d06-801b-27202aec7cba\" (UID: \"fd83e24f-0a04-4d06-801b-27202aec7cba\") " Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.939703 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-utilities" (OuterVolumeSpecName: "utilities") pod "fd83e24f-0a04-4d06-801b-27202aec7cba" (UID: 
"fd83e24f-0a04-4d06-801b-27202aec7cba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.943652 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd83e24f-0a04-4d06-801b-27202aec7cba-kube-api-access-462w7" (OuterVolumeSpecName: "kube-api-access-462w7") pod "fd83e24f-0a04-4d06-801b-27202aec7cba" (UID: "fd83e24f-0a04-4d06-801b-27202aec7cba"). InnerVolumeSpecName "kube-api-access-462w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:14:45 crc kubenswrapper[4805]: I0226 18:14:45.967710 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd83e24f-0a04-4d06-801b-27202aec7cba" (UID: "fd83e24f-0a04-4d06-801b-27202aec7cba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.040479 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462w7\" (UniqueName: \"kubernetes.io/projected/fd83e24f-0a04-4d06-801b-27202aec7cba-kube-api-access-462w7\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.040512 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.040523 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd83e24f-0a04-4d06-801b-27202aec7cba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.251535 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerID="355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018" exitCode=0 Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.251582 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerDied","Data":"355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018"} Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.251625 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8c6hb" event={"ID":"fd83e24f-0a04-4d06-801b-27202aec7cba","Type":"ContainerDied","Data":"790ad17377a33798e44c20dd12688c5adabe4acc5e7de94e6f5778cf4907dbd2"} Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.251645 4805 scope.go:117] "RemoveContainer" containerID="355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.251641 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8c6hb" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.283910 4805 scope.go:117] "RemoveContainer" containerID="59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.305717 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8c6hb"] Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.314616 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8c6hb"] Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.338871 4805 scope.go:117] "RemoveContainer" containerID="4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.387480 4805 scope.go:117] "RemoveContainer" containerID="355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018" Feb 26 18:14:46 crc kubenswrapper[4805]: E0226 18:14:46.387976 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018\": container with ID starting with 355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018 not found: ID does not exist" containerID="355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.388067 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018"} err="failed to get container status \"355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018\": rpc error: code = NotFound desc = could not find container \"355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018\": container with ID starting with 355e338c13ff6f564b0099f47acf1797af423433a3544017f6ce5a07c84fe018 not found: 
ID does not exist" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.388108 4805 scope.go:117] "RemoveContainer" containerID="59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe" Feb 26 18:14:46 crc kubenswrapper[4805]: E0226 18:14:46.388725 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe\": container with ID starting with 59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe not found: ID does not exist" containerID="59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.388786 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe"} err="failed to get container status \"59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe\": rpc error: code = NotFound desc = could not find container \"59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe\": container with ID starting with 59543898ca8119b111d74ded9c1f82c5fede45571c04b8cedb99ea0cddbad1fe not found: ID does not exist" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.388827 4805 scope.go:117] "RemoveContainer" containerID="4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f" Feb 26 18:14:46 crc kubenswrapper[4805]: E0226 18:14:46.390119 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f\": container with ID starting with 4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f not found: ID does not exist" containerID="4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f" Feb 26 18:14:46 crc kubenswrapper[4805]: I0226 18:14:46.390161 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f"} err="failed to get container status \"4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f\": rpc error: code = NotFound desc = could not find container \"4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f\": container with ID starting with 4ecab459769ffaae4603781790d50a622908b973944351addfedc1b1a2e1ee8f not found: ID does not exist" Feb 26 18:14:47 crc kubenswrapper[4805]: I0226 18:14:47.047336 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" path="/var/lib/kubelet/pods/fd83e24f-0a04-4d06-801b-27202aec7cba/volumes" Feb 26 18:14:48 crc kubenswrapper[4805]: I0226 18:14:48.008607 4805 scope.go:117] "RemoveContainer" containerID="e6b4b7a0124934599dd08cd60161f5fa01f7cca67d6b665b528ba7db9a62e58c" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.175123 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw"] Feb 26 18:15:00 crc kubenswrapper[4805]: E0226 18:15:00.176005 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="extract-content" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.176035 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="extract-content" Feb 26 18:15:00 crc kubenswrapper[4805]: E0226 18:15:00.176056 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="registry-server" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.176063 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="registry-server" Feb 26 18:15:00 crc kubenswrapper[4805]: E0226 18:15:00.176085 4805 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="extract-utilities" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.176091 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="extract-utilities" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.176283 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd83e24f-0a04-4d06-801b-27202aec7cba" containerName="registry-server" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.177083 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.179717 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.180271 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.193948 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw"] Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.255521 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f33d52ff-e958-4122-bb37-e5d5dbd65d34-secret-volume\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.255685 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f33d52ff-e958-4122-bb37-e5d5dbd65d34-config-volume\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.255707 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdss2\" (UniqueName: \"kubernetes.io/projected/f33d52ff-e958-4122-bb37-e5d5dbd65d34-kube-api-access-kdss2\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.357268 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f33d52ff-e958-4122-bb37-e5d5dbd65d34-secret-volume\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.357506 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f33d52ff-e958-4122-bb37-e5d5dbd65d34-config-volume\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.357545 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdss2\" (UniqueName: \"kubernetes.io/projected/f33d52ff-e958-4122-bb37-e5d5dbd65d34-kube-api-access-kdss2\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc 
kubenswrapper[4805]: I0226 18:15:00.358423 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f33d52ff-e958-4122-bb37-e5d5dbd65d34-config-volume\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.369590 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f33d52ff-e958-4122-bb37-e5d5dbd65d34-secret-volume\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.375272 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdss2\" (UniqueName: \"kubernetes.io/projected/f33d52ff-e958-4122-bb37-e5d5dbd65d34-kube-api-access-kdss2\") pod \"collect-profiles-29535495-rtlhw\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:00 crc kubenswrapper[4805]: I0226 18:15:00.504046 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:01 crc kubenswrapper[4805]: I0226 18:15:01.043444 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw"] Feb 26 18:15:01 crc kubenswrapper[4805]: I0226 18:15:01.411997 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" event={"ID":"f33d52ff-e958-4122-bb37-e5d5dbd65d34","Type":"ContainerStarted","Data":"a3c3d87046274c24587ba8730b618b9049896a4e791068abb07f9afb8803741a"} Feb 26 18:15:01 crc kubenswrapper[4805]: I0226 18:15:01.412386 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" event={"ID":"f33d52ff-e958-4122-bb37-e5d5dbd65d34","Type":"ContainerStarted","Data":"399abebe23e6d6f7b5bdc5ba1a6fcb7b998e67041d7ca23674e3585e1aa2fc76"} Feb 26 18:15:01 crc kubenswrapper[4805]: I0226 18:15:01.436175 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" podStartSLOduration=1.436157152 podStartE2EDuration="1.436157152s" podCreationTimestamp="2026-02-26 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-26 18:15:01.435246359 +0000 UTC m=+3615.997000698" watchObservedRunningTime="2026-02-26 18:15:01.436157152 +0000 UTC m=+3615.997911491" Feb 26 18:15:02 crc kubenswrapper[4805]: I0226 18:15:02.421624 4805 generic.go:334] "Generic (PLEG): container finished" podID="f33d52ff-e958-4122-bb37-e5d5dbd65d34" containerID="a3c3d87046274c24587ba8730b618b9049896a4e791068abb07f9afb8803741a" exitCode=0 Feb 26 18:15:02 crc kubenswrapper[4805]: I0226 18:15:02.421677 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" event={"ID":"f33d52ff-e958-4122-bb37-e5d5dbd65d34","Type":"ContainerDied","Data":"a3c3d87046274c24587ba8730b618b9049896a4e791068abb07f9afb8803741a"} Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.928061 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.934531 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdss2\" (UniqueName: \"kubernetes.io/projected/f33d52ff-e958-4122-bb37-e5d5dbd65d34-kube-api-access-kdss2\") pod \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.934612 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f33d52ff-e958-4122-bb37-e5d5dbd65d34-config-volume\") pod \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.934706 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f33d52ff-e958-4122-bb37-e5d5dbd65d34-secret-volume\") pod \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\" (UID: \"f33d52ff-e958-4122-bb37-e5d5dbd65d34\") " Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.935297 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33d52ff-e958-4122-bb37-e5d5dbd65d34-config-volume" (OuterVolumeSpecName: "config-volume") pod "f33d52ff-e958-4122-bb37-e5d5dbd65d34" (UID: "f33d52ff-e958-4122-bb37-e5d5dbd65d34"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.940446 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33d52ff-e958-4122-bb37-e5d5dbd65d34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f33d52ff-e958-4122-bb37-e5d5dbd65d34" (UID: "f33d52ff-e958-4122-bb37-e5d5dbd65d34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:15:03 crc kubenswrapper[4805]: I0226 18:15:03.940474 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33d52ff-e958-4122-bb37-e5d5dbd65d34-kube-api-access-kdss2" (OuterVolumeSpecName: "kube-api-access-kdss2") pod "f33d52ff-e958-4122-bb37-e5d5dbd65d34" (UID: "f33d52ff-e958-4122-bb37-e5d5dbd65d34"). InnerVolumeSpecName "kube-api-access-kdss2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.036239 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdss2\" (UniqueName: \"kubernetes.io/projected/f33d52ff-e958-4122-bb37-e5d5dbd65d34-kube-api-access-kdss2\") on node \"crc\" DevicePath \"\"" Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.036274 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f33d52ff-e958-4122-bb37-e5d5dbd65d34-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.036286 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f33d52ff-e958-4122-bb37-e5d5dbd65d34-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.479313 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw" 
event={"ID":"f33d52ff-e958-4122-bb37-e5d5dbd65d34","Type":"ContainerDied","Data":"399abebe23e6d6f7b5bdc5ba1a6fcb7b998e67041d7ca23674e3585e1aa2fc76"}
Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.479358 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399abebe23e6d6f7b5bdc5ba1a6fcb7b998e67041d7ca23674e3585e1aa2fc76"
Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.479377 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535495-rtlhw"
Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.545586 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"]
Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.558556 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535450-pkxrs"]
Feb 26 18:15:04 crc kubenswrapper[4805]: I0226 18:15:04.969745 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b832ebc6-5bcc-437a-9ff7-8e9987e423af" path="/var/lib/kubelet/pods/b832ebc6-5bcc-437a-9ff7-8e9987e423af/volumes"
Feb 26 18:15:48 crc kubenswrapper[4805]: I0226 18:15:48.135928 4805 scope.go:117] "RemoveContainer" containerID="1a8ce2778aa1f4965424fa96a8b7ca85ae26ff8f3b4144fb6d807265af867ded"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.152986 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535496-xxdbg"]
Feb 26 18:16:00 crc kubenswrapper[4805]: E0226 18:16:00.155419 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33d52ff-e958-4122-bb37-e5d5dbd65d34" containerName="collect-profiles"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.155449 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33d52ff-e958-4122-bb37-e5d5dbd65d34" containerName="collect-profiles"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.155770 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33d52ff-e958-4122-bb37-e5d5dbd65d34" containerName="collect-profiles"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.156603 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.164556 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.165337 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535496-xxdbg"]
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.165485 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.165564 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.213061 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fsk7\" (UniqueName: \"kubernetes.io/projected/e56b67f3-6a30-4521-bb90-eda160f861fa-kube-api-access-6fsk7\") pod \"auto-csr-approver-29535496-xxdbg\" (UID: \"e56b67f3-6a30-4521-bb90-eda160f861fa\") " pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.315373 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fsk7\" (UniqueName: \"kubernetes.io/projected/e56b67f3-6a30-4521-bb90-eda160f861fa-kube-api-access-6fsk7\") pod \"auto-csr-approver-29535496-xxdbg\" (UID: \"e56b67f3-6a30-4521-bb90-eda160f861fa\") " pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.343316 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fsk7\" (UniqueName: \"kubernetes.io/projected/e56b67f3-6a30-4521-bb90-eda160f861fa-kube-api-access-6fsk7\") pod \"auto-csr-approver-29535496-xxdbg\" (UID: \"e56b67f3-6a30-4521-bb90-eda160f861fa\") " pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:00 crc kubenswrapper[4805]: I0226 18:16:00.476119 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:01 crc kubenswrapper[4805]: I0226 18:16:01.011967 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535496-xxdbg"]
Feb 26 18:16:01 crc kubenswrapper[4805]: I0226 18:16:01.123146 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535496-xxdbg" event={"ID":"e56b67f3-6a30-4521-bb90-eda160f861fa","Type":"ContainerStarted","Data":"374f52818a535f037d2493506407b293eab5042262e4f2d5461062bb2ad2791b"}
Feb 26 18:16:02 crc kubenswrapper[4805]: I0226 18:16:02.978587 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 18:16:02 crc kubenswrapper[4805]: I0226 18:16:02.979189 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 18:16:03 crc kubenswrapper[4805]: I0226 18:16:03.161220 4805 generic.go:334] "Generic (PLEG): container finished" podID="e56b67f3-6a30-4521-bb90-eda160f861fa" containerID="ac62308522dad1df849e57bd2a32d86729c275f3de768e581c0355f9e19046c2" exitCode=0
Feb 26 18:16:03 crc kubenswrapper[4805]: I0226 18:16:03.161799 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535496-xxdbg" event={"ID":"e56b67f3-6a30-4521-bb90-eda160f861fa","Type":"ContainerDied","Data":"ac62308522dad1df849e57bd2a32d86729c275f3de768e581c0355f9e19046c2"}
Feb 26 18:16:04 crc kubenswrapper[4805]: I0226 18:16:04.747038 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:04 crc kubenswrapper[4805]: I0226 18:16:04.769164 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fsk7\" (UniqueName: \"kubernetes.io/projected/e56b67f3-6a30-4521-bb90-eda160f861fa-kube-api-access-6fsk7\") pod \"e56b67f3-6a30-4521-bb90-eda160f861fa\" (UID: \"e56b67f3-6a30-4521-bb90-eda160f861fa\") "
Feb 26 18:16:04 crc kubenswrapper[4805]: I0226 18:16:04.777647 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56b67f3-6a30-4521-bb90-eda160f861fa-kube-api-access-6fsk7" (OuterVolumeSpecName: "kube-api-access-6fsk7") pod "e56b67f3-6a30-4521-bb90-eda160f861fa" (UID: "e56b67f3-6a30-4521-bb90-eda160f861fa"). InnerVolumeSpecName "kube-api-access-6fsk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:16:04 crc kubenswrapper[4805]: I0226 18:16:04.872574 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fsk7\" (UniqueName: \"kubernetes.io/projected/e56b67f3-6a30-4521-bb90-eda160f861fa-kube-api-access-6fsk7\") on node \"crc\" DevicePath \"\""
Feb 26 18:16:05 crc kubenswrapper[4805]: I0226 18:16:05.184858 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535496-xxdbg" event={"ID":"e56b67f3-6a30-4521-bb90-eda160f861fa","Type":"ContainerDied","Data":"374f52818a535f037d2493506407b293eab5042262e4f2d5461062bb2ad2791b"}
Feb 26 18:16:05 crc kubenswrapper[4805]: I0226 18:16:05.184907 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374f52818a535f037d2493506407b293eab5042262e4f2d5461062bb2ad2791b"
Feb 26 18:16:05 crc kubenswrapper[4805]: I0226 18:16:05.185603 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535496-xxdbg"
Feb 26 18:16:05 crc kubenswrapper[4805]: I0226 18:16:05.821983 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535490-pmznq"]
Feb 26 18:16:05 crc kubenswrapper[4805]: I0226 18:16:05.831735 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535490-pmznq"]
Feb 26 18:16:06 crc kubenswrapper[4805]: I0226 18:16:06.964878 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9670664a-aeea-4a81-8789-cd93b87c8a03" path="/var/lib/kubelet/pods/9670664a-aeea-4a81-8789-cd93b87c8a03/volumes"
Feb 26 18:16:32 crc kubenswrapper[4805]: I0226 18:16:32.978280 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 18:16:32 crc kubenswrapper[4805]: I0226 18:16:32.979113 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 18:16:48 crc kubenswrapper[4805]: I0226 18:16:48.203160 4805 scope.go:117] "RemoveContainer" containerID="c4e594f4db8223809056923bfddcb40a1f82f6ca5b2bd1d26e326a5fdb1df968"
Feb 26 18:17:02 crc kubenswrapper[4805]: I0226 18:17:02.978748 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 18:17:02 crc kubenswrapper[4805]: I0226 18:17:02.979373 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 26 18:17:02 crc kubenswrapper[4805]: I0226 18:17:02.979416 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9"
Feb 26 18:17:02 crc kubenswrapper[4805]: I0226 18:17:02.980210 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1edd14159ce6432f8e6114fdcbd7c7a338238d8dfa653a4f4dfc4085024378c"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 26 18:17:02 crc kubenswrapper[4805]: I0226 18:17:02.980257 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://e1edd14159ce6432f8e6114fdcbd7c7a338238d8dfa653a4f4dfc4085024378c" gracePeriod=600
Feb 26 18:17:03 crc kubenswrapper[4805]: I0226 18:17:03.866887 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="e1edd14159ce6432f8e6114fdcbd7c7a338238d8dfa653a4f4dfc4085024378c" exitCode=0
Feb 26 18:17:03 crc kubenswrapper[4805]: I0226 18:17:03.867165 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"e1edd14159ce6432f8e6114fdcbd7c7a338238d8dfa653a4f4dfc4085024378c"}
Feb 26 18:17:03 crc kubenswrapper[4805]: I0226 18:17:03.867349 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd"}
Feb 26 18:17:03 crc kubenswrapper[4805]: I0226 18:17:03.867368 4805 scope.go:117] "RemoveContainer" containerID="3dde3ae0c6957002421c38b652b72ebb5280728ffecf8fb3fb8bf0c13b250691"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.150639 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535498-6z6zw"]
Feb 26 18:18:00 crc kubenswrapper[4805]: E0226 18:18:00.151555 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56b67f3-6a30-4521-bb90-eda160f861fa" containerName="oc"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.151570 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56b67f3-6a30-4521-bb90-eda160f861fa" containerName="oc"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.151761 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56b67f3-6a30-4521-bb90-eda160f861fa" containerName="oc"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.152551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.155547 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.155788 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.155904 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.165637 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535498-6z6zw"]
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.296241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bc2c\" (UniqueName: \"kubernetes.io/projected/3953e5e0-11ea-4570-a673-fcfd8122d2bd-kube-api-access-5bc2c\") pod \"auto-csr-approver-29535498-6z6zw\" (UID: \"3953e5e0-11ea-4570-a673-fcfd8122d2bd\") " pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.398001 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bc2c\" (UniqueName: \"kubernetes.io/projected/3953e5e0-11ea-4570-a673-fcfd8122d2bd-kube-api-access-5bc2c\") pod \"auto-csr-approver-29535498-6z6zw\" (UID: \"3953e5e0-11ea-4570-a673-fcfd8122d2bd\") " pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.420493 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bc2c\" (UniqueName: \"kubernetes.io/projected/3953e5e0-11ea-4570-a673-fcfd8122d2bd-kube-api-access-5bc2c\") pod \"auto-csr-approver-29535498-6z6zw\" (UID: \"3953e5e0-11ea-4570-a673-fcfd8122d2bd\") " pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:00 crc kubenswrapper[4805]: I0226 18:18:00.496532 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:01 crc kubenswrapper[4805]: I0226 18:18:01.033892 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535498-6z6zw"]
Feb 26 18:18:01 crc kubenswrapper[4805]: I0226 18:18:01.039419 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 26 18:18:01 crc kubenswrapper[4805]: I0226 18:18:01.512099 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535498-6z6zw" event={"ID":"3953e5e0-11ea-4570-a673-fcfd8122d2bd","Type":"ContainerStarted","Data":"633e235cf5ece1c196919ce4a23d2ca090f07677d75cac245a4ccf5f356040fe"}
Feb 26 18:18:02 crc kubenswrapper[4805]: I0226 18:18:02.524746 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535498-6z6zw" event={"ID":"3953e5e0-11ea-4570-a673-fcfd8122d2bd","Type":"ContainerStarted","Data":"d5d3c2d664a5374074edcedb444030c0c0b56321d9bf0b1558a0d7233289d050"}
Feb 26 18:18:02 crc kubenswrapper[4805]: I0226 18:18:02.545834 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535498-6z6zw" podStartSLOduration=1.427307856 podStartE2EDuration="2.545806798s" podCreationTimestamp="2026-02-26 18:18:00 +0000 UTC" firstStartedPulling="2026-02-26 18:18:01.039228572 +0000 UTC m=+3795.600982911" lastFinishedPulling="2026-02-26 18:18:02.157727504 +0000 UTC m=+3796.719481853" observedRunningTime="2026-02-26 18:18:02.539626121 +0000 UTC m=+3797.101380490" watchObservedRunningTime="2026-02-26 18:18:02.545806798 +0000 UTC m=+3797.107561167"
Feb 26 18:18:03 crc kubenswrapper[4805]: I0226 18:18:03.539888 4805 generic.go:334] "Generic (PLEG): container finished" podID="3953e5e0-11ea-4570-a673-fcfd8122d2bd" containerID="d5d3c2d664a5374074edcedb444030c0c0b56321d9bf0b1558a0d7233289d050" exitCode=0
Feb 26 18:18:03 crc kubenswrapper[4805]: I0226 18:18:03.540198 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535498-6z6zw" event={"ID":"3953e5e0-11ea-4570-a673-fcfd8122d2bd","Type":"ContainerDied","Data":"d5d3c2d664a5374074edcedb444030c0c0b56321d9bf0b1558a0d7233289d050"}
Feb 26 18:18:04 crc kubenswrapper[4805]: I0226 18:18:04.953936 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.100677 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bc2c\" (UniqueName: \"kubernetes.io/projected/3953e5e0-11ea-4570-a673-fcfd8122d2bd-kube-api-access-5bc2c\") pod \"3953e5e0-11ea-4570-a673-fcfd8122d2bd\" (UID: \"3953e5e0-11ea-4570-a673-fcfd8122d2bd\") "
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.108381 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3953e5e0-11ea-4570-a673-fcfd8122d2bd-kube-api-access-5bc2c" (OuterVolumeSpecName: "kube-api-access-5bc2c") pod "3953e5e0-11ea-4570-a673-fcfd8122d2bd" (UID: "3953e5e0-11ea-4570-a673-fcfd8122d2bd"). InnerVolumeSpecName "kube-api-access-5bc2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.204177 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bc2c\" (UniqueName: \"kubernetes.io/projected/3953e5e0-11ea-4570-a673-fcfd8122d2bd-kube-api-access-5bc2c\") on node \"crc\" DevicePath \"\""
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.567433 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535498-6z6zw" event={"ID":"3953e5e0-11ea-4570-a673-fcfd8122d2bd","Type":"ContainerDied","Data":"633e235cf5ece1c196919ce4a23d2ca090f07677d75cac245a4ccf5f356040fe"}
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.567708 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633e235cf5ece1c196919ce4a23d2ca090f07677d75cac245a4ccf5f356040fe"
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.567776 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535498-6z6zw"
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.640749 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535492-cqnbn"]
Feb 26 18:18:05 crc kubenswrapper[4805]: I0226 18:18:05.649504 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535492-cqnbn"]
Feb 26 18:18:06 crc kubenswrapper[4805]: I0226 18:18:06.969320 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ec298b-b595-48c0-8bab-35dbefa4a5e5" path="/var/lib/kubelet/pods/b2ec298b-b595-48c0-8bab-35dbefa4a5e5/volumes"
Feb 26 18:18:48 crc kubenswrapper[4805]: I0226 18:18:48.315542 4805 scope.go:117] "RemoveContainer" containerID="cc81d72902506a817bf18e2c9a26e1d0d9e5464925ce589eb1a2ebe52627508a"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.160138 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q77ps"]
Feb 26 18:19:17 crc kubenswrapper[4805]: E0226 18:19:17.161405 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3953e5e0-11ea-4570-a673-fcfd8122d2bd" containerName="oc"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.161425 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="3953e5e0-11ea-4570-a673-fcfd8122d2bd" containerName="oc"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.161724 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="3953e5e0-11ea-4570-a673-fcfd8122d2bd" containerName="oc"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.165113 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.177682 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q77ps"]
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.310008 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-catalog-content\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.310289 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxnwx\" (UniqueName: \"kubernetes.io/projected/74a56cde-727f-49eb-be83-4f90add701eb-kube-api-access-jxnwx\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.310899 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-utilities\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.412793 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-utilities\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.412922 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-catalog-content\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.412963 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxnwx\" (UniqueName: \"kubernetes.io/projected/74a56cde-727f-49eb-be83-4f90add701eb-kube-api-access-jxnwx\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.414304 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-catalog-content\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.414392 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-utilities\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.435721 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxnwx\" (UniqueName: \"kubernetes.io/projected/74a56cde-727f-49eb-be83-4f90add701eb-kube-api-access-jxnwx\") pod \"community-operators-q77ps\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") " pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:17 crc kubenswrapper[4805]: I0226 18:19:17.503090 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:18 crc kubenswrapper[4805]: I0226 18:19:18.030244 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q77ps"]
Feb 26 18:19:18 crc kubenswrapper[4805]: I0226 18:19:18.391177 4805 generic.go:334] "Generic (PLEG): container finished" podID="74a56cde-727f-49eb-be83-4f90add701eb" containerID="d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7" exitCode=0
Feb 26 18:19:18 crc kubenswrapper[4805]: I0226 18:19:18.391267 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerDied","Data":"d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7"}
Feb 26 18:19:18 crc kubenswrapper[4805]: I0226 18:19:18.391481 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerStarted","Data":"8b5f30519f2ce47312c961196af443a216ed4c447178143872da8d961a90bb6d"}
Feb 26 18:19:19 crc kubenswrapper[4805]: I0226 18:19:19.403435 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerStarted","Data":"b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c"}
Feb 26 18:19:21 crc kubenswrapper[4805]: I0226 18:19:21.431537 4805 generic.go:334] "Generic (PLEG): container finished" podID="74a56cde-727f-49eb-be83-4f90add701eb" containerID="b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c" exitCode=0
Feb 26 18:19:21 crc kubenswrapper[4805]: I0226 18:19:21.431584 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerDied","Data":"b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c"}
Feb 26 18:19:22 crc kubenswrapper[4805]: I0226 18:19:22.446594 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerStarted","Data":"0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7"}
Feb 26 18:19:22 crc kubenswrapper[4805]: I0226 18:19:22.474177 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q77ps" podStartSLOduration=2.0402315030000002 podStartE2EDuration="5.474154809s" podCreationTimestamp="2026-02-26 18:19:17 +0000 UTC" firstStartedPulling="2026-02-26 18:19:18.392991987 +0000 UTC m=+3872.954746346" lastFinishedPulling="2026-02-26 18:19:21.826915313 +0000 UTC m=+3876.388669652" observedRunningTime="2026-02-26 18:19:22.468932326 +0000 UTC m=+3877.030686675" watchObservedRunningTime="2026-02-26 18:19:22.474154809 +0000 UTC m=+3877.035909148"
Feb 26 18:19:27 crc kubenswrapper[4805]: I0226 18:19:27.504505 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:27 crc kubenswrapper[4805]: I0226 18:19:27.505150 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:27 crc kubenswrapper[4805]: I0226 18:19:27.551221 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:28 crc kubenswrapper[4805]: I0226 18:19:28.586761 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:28 crc kubenswrapper[4805]: I0226 18:19:28.655659 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q77ps"]
Feb 26 18:19:30 crc kubenswrapper[4805]: I0226 18:19:30.540611 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q77ps" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="registry-server" containerID="cri-o://0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7" gracePeriod=2
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.116692 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.302939 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxnwx\" (UniqueName: \"kubernetes.io/projected/74a56cde-727f-49eb-be83-4f90add701eb-kube-api-access-jxnwx\") pod \"74a56cde-727f-49eb-be83-4f90add701eb\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") "
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.303211 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-utilities\") pod \"74a56cde-727f-49eb-be83-4f90add701eb\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") "
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.303484 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-catalog-content\") pod \"74a56cde-727f-49eb-be83-4f90add701eb\" (UID: \"74a56cde-727f-49eb-be83-4f90add701eb\") "
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.304126 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-utilities" (OuterVolumeSpecName: "utilities") pod "74a56cde-727f-49eb-be83-4f90add701eb" (UID: "74a56cde-727f-49eb-be83-4f90add701eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.304451 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-utilities\") on node \"crc\" DevicePath \"\""
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.319318 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a56cde-727f-49eb-be83-4f90add701eb-kube-api-access-jxnwx" (OuterVolumeSpecName: "kube-api-access-jxnwx") pod "74a56cde-727f-49eb-be83-4f90add701eb" (UID: "74a56cde-727f-49eb-be83-4f90add701eb"). InnerVolumeSpecName "kube-api-access-jxnwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.360467 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74a56cde-727f-49eb-be83-4f90add701eb" (UID: "74a56cde-727f-49eb-be83-4f90add701eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.406776 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74a56cde-727f-49eb-be83-4f90add701eb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.406821 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxnwx\" (UniqueName: \"kubernetes.io/projected/74a56cde-727f-49eb-be83-4f90add701eb-kube-api-access-jxnwx\") on node \"crc\" DevicePath \"\""
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.553003 4805 generic.go:334] "Generic (PLEG): container finished" podID="74a56cde-727f-49eb-be83-4f90add701eb" containerID="0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7" exitCode=0
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.553079 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerDied","Data":"0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7"}
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.553104 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q77ps" event={"ID":"74a56cde-727f-49eb-be83-4f90add701eb","Type":"ContainerDied","Data":"8b5f30519f2ce47312c961196af443a216ed4c447178143872da8d961a90bb6d"}
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.553120 4805 scope.go:117] "RemoveContainer" containerID="0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.553255 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q77ps"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.596726 4805 scope.go:117] "RemoveContainer" containerID="b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.609942 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q77ps"]
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.625760 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q77ps"]
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.640641 4805 scope.go:117] "RemoveContainer" containerID="d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.668192 4805 scope.go:117] "RemoveContainer" containerID="0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7"
Feb 26 18:19:31 crc kubenswrapper[4805]: E0226 18:19:31.668679 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7\": container with ID starting with 0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7 not found: ID does not exist" containerID="0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.668732 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7"} err="failed to get container status \"0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7\": rpc error: code = NotFound desc = could not find container \"0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7\": container with ID starting with 0e55c7f9c020fa16635528d6f36432e0f0a65af695ed4296e60aa9d6be1f03a7 not found: ID does not exist"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.668754 4805 scope.go:117] "RemoveContainer" containerID="b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c"
Feb 26 18:19:31 crc kubenswrapper[4805]: E0226 18:19:31.669155 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c\": container with ID starting with b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c not found: ID does not exist" containerID="b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.669200 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c"} err="failed to get container status \"b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c\": rpc error: code = NotFound desc = could not find container \"b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c\": container with ID starting with b297f83e7d3d4af5f5bc95c72be7fd4f7a14f980c532e93432f113c72650cc1c not found: ID does not exist"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.669215 4805 scope.go:117] "RemoveContainer" containerID="d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7"
Feb 26 18:19:31 crc kubenswrapper[4805]: E0226 18:19:31.669533 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7\": container with ID starting with d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7 not found: ID does not exist" containerID="d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7"
Feb 26 18:19:31 crc kubenswrapper[4805]: I0226 18:19:31.669563 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7"} err="failed to get container status \"d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7\": rpc error: code = NotFound desc = could not find container \"d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7\": container with ID starting with d552e98e5998769f0c8182c2dc37fd15f5573a9e2df5937b3001a51db3bfd4a7 not found: ID does not exist"
Feb 26 18:19:31 crc kubenswrapper[4805]: E0226 18:19:31.708580 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a56cde_727f_49eb_be83_4f90add701eb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74a56cde_727f_49eb_be83_4f90add701eb.slice/crio-8b5f30519f2ce47312c961196af443a216ed4c447178143872da8d961a90bb6d\": RecentStats: unable to find data in memory cache]"
Feb 26 18:19:32 crc kubenswrapper[4805]: I0226 18:19:32.969980 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a56cde-727f-49eb-be83-4f90add701eb" path="/var/lib/kubelet/pods/74a56cde-727f-49eb-be83-4f90add701eb/volumes"
Feb 26 18:19:32 crc kubenswrapper[4805]: I0226 18:19:32.978060 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 26 18:19:32 crc kubenswrapper[4805]: I0226 18:19:32.978124 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure"
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.155538 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535500-nkqcq"] Feb 26 18:20:00 crc kubenswrapper[4805]: E0226 18:20:00.158067 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="extract-utilities" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.158193 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="extract-utilities" Feb 26 18:20:00 crc kubenswrapper[4805]: E0226 18:20:00.158301 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="extract-content" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.158387 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="extract-content" Feb 26 18:20:00 crc kubenswrapper[4805]: E0226 18:20:00.158520 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="registry-server" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.158606 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="registry-server" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.158967 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a56cde-727f-49eb-be83-4f90add701eb" containerName="registry-server" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.160226 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.162439 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.162640 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.162841 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.174752 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535500-nkqcq"] Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.243264 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvv9q\" (UniqueName: \"kubernetes.io/projected/6aee106a-2544-47a1-bd85-4ddce85fd4e0-kube-api-access-rvv9q\") pod \"auto-csr-approver-29535500-nkqcq\" (UID: \"6aee106a-2544-47a1-bd85-4ddce85fd4e0\") " pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.345426 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvv9q\" (UniqueName: \"kubernetes.io/projected/6aee106a-2544-47a1-bd85-4ddce85fd4e0-kube-api-access-rvv9q\") pod \"auto-csr-approver-29535500-nkqcq\" (UID: \"6aee106a-2544-47a1-bd85-4ddce85fd4e0\") " pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.363108 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvv9q\" (UniqueName: \"kubernetes.io/projected/6aee106a-2544-47a1-bd85-4ddce85fd4e0-kube-api-access-rvv9q\") pod \"auto-csr-approver-29535500-nkqcq\" (UID: \"6aee106a-2544-47a1-bd85-4ddce85fd4e0\") " 
pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.489113 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:00 crc kubenswrapper[4805]: I0226 18:20:00.989870 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535500-nkqcq"] Feb 26 18:20:01 crc kubenswrapper[4805]: I0226 18:20:01.915192 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" event={"ID":"6aee106a-2544-47a1-bd85-4ddce85fd4e0","Type":"ContainerStarted","Data":"b53e469b4af2ed3d039d4dcbb2920201ced2e23cad9cae31f41135761304ebd9"} Feb 26 18:20:02 crc kubenswrapper[4805]: I0226 18:20:02.929140 4805 generic.go:334] "Generic (PLEG): container finished" podID="6aee106a-2544-47a1-bd85-4ddce85fd4e0" containerID="e156e37e910cada355630962ee0731e822e9446d66c7f5953e99860c174d3c77" exitCode=0 Feb 26 18:20:02 crc kubenswrapper[4805]: I0226 18:20:02.929210 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" event={"ID":"6aee106a-2544-47a1-bd85-4ddce85fd4e0","Type":"ContainerDied","Data":"e156e37e910cada355630962ee0731e822e9446d66c7f5953e99860c174d3c77"} Feb 26 18:20:02 crc kubenswrapper[4805]: I0226 18:20:02.977716 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:20:02 crc kubenswrapper[4805]: I0226 18:20:02.977817 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.440608 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.538369 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvv9q\" (UniqueName: \"kubernetes.io/projected/6aee106a-2544-47a1-bd85-4ddce85fd4e0-kube-api-access-rvv9q\") pod \"6aee106a-2544-47a1-bd85-4ddce85fd4e0\" (UID: \"6aee106a-2544-47a1-bd85-4ddce85fd4e0\") " Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.545561 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aee106a-2544-47a1-bd85-4ddce85fd4e0-kube-api-access-rvv9q" (OuterVolumeSpecName: "kube-api-access-rvv9q") pod "6aee106a-2544-47a1-bd85-4ddce85fd4e0" (UID: "6aee106a-2544-47a1-bd85-4ddce85fd4e0"). InnerVolumeSpecName "kube-api-access-rvv9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.640960 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvv9q\" (UniqueName: \"kubernetes.io/projected/6aee106a-2544-47a1-bd85-4ddce85fd4e0-kube-api-access-rvv9q\") on node \"crc\" DevicePath \"\"" Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.954673 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.982086 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535500-nkqcq" event={"ID":"6aee106a-2544-47a1-bd85-4ddce85fd4e0","Type":"ContainerDied","Data":"b53e469b4af2ed3d039d4dcbb2920201ced2e23cad9cae31f41135761304ebd9"} Feb 26 18:20:04 crc kubenswrapper[4805]: I0226 18:20:04.982131 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53e469b4af2ed3d039d4dcbb2920201ced2e23cad9cae31f41135761304ebd9" Feb 26 18:20:05 crc kubenswrapper[4805]: I0226 18:20:05.533314 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535494-fggpt"] Feb 26 18:20:05 crc kubenswrapper[4805]: I0226 18:20:05.545906 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535494-fggpt"] Feb 26 18:20:06 crc kubenswrapper[4805]: I0226 18:20:06.973323 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0069ef07-ee19-4617-8617-9db194e2e3d9" path="/var/lib/kubelet/pods/0069ef07-ee19-4617-8617-9db194e2e3d9/volumes" Feb 26 18:20:32 crc kubenswrapper[4805]: I0226 18:20:32.977903 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:20:32 crc kubenswrapper[4805]: I0226 18:20:32.978698 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:20:32 crc 
kubenswrapper[4805]: I0226 18:20:32.978766 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 18:20:32 crc kubenswrapper[4805]: I0226 18:20:32.980140 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 18:20:32 crc kubenswrapper[4805]: I0226 18:20:32.980286 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" gracePeriod=600 Feb 26 18:20:33 crc kubenswrapper[4805]: E0226 18:20:33.110686 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:20:33 crc kubenswrapper[4805]: I0226 18:20:33.259237 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" exitCode=0 Feb 26 18:20:33 crc kubenswrapper[4805]: I0226 18:20:33.259325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd"} Feb 26 18:20:33 crc kubenswrapper[4805]: I0226 18:20:33.259660 4805 scope.go:117] "RemoveContainer" containerID="e1edd14159ce6432f8e6114fdcbd7c7a338238d8dfa653a4f4dfc4085024378c" Feb 26 18:20:33 crc kubenswrapper[4805]: I0226 18:20:33.260295 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:20:33 crc kubenswrapper[4805]: E0226 18:20:33.260580 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:20:47 crc kubenswrapper[4805]: I0226 18:20:47.953627 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:20:47 crc kubenswrapper[4805]: E0226 18:20:47.954280 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:20:48 crc kubenswrapper[4805]: I0226 18:20:48.462446 4805 scope.go:117] "RemoveContainer" containerID="d32bfb0323b61455b413ac10fcceb3bd4d53887be96a5fcb7beb43578304bc9a" Feb 26 18:21:00 crc kubenswrapper[4805]: I0226 18:21:00.953160 4805 scope.go:117] "RemoveContainer" 
containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:21:00 crc kubenswrapper[4805]: E0226 18:21:00.953905 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:21:13 crc kubenswrapper[4805]: I0226 18:21:13.953526 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:21:13 crc kubenswrapper[4805]: E0226 18:21:13.954509 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:21:27 crc kubenswrapper[4805]: I0226 18:21:27.953638 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:21:27 crc kubenswrapper[4805]: E0226 18:21:27.957554 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:21:39 crc kubenswrapper[4805]: I0226 18:21:39.954188 4805 scope.go:117] 
"RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:21:39 crc kubenswrapper[4805]: E0226 18:21:39.955144 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:21:52 crc kubenswrapper[4805]: I0226 18:21:52.958880 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:21:52 crc kubenswrapper[4805]: E0226 18:21:52.959755 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.148443 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535502-rtkgq"] Feb 26 18:22:00 crc kubenswrapper[4805]: E0226 18:22:00.149691 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aee106a-2544-47a1-bd85-4ddce85fd4e0" containerName="oc" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.149713 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aee106a-2544-47a1-bd85-4ddce85fd4e0" containerName="oc" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.150079 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aee106a-2544-47a1-bd85-4ddce85fd4e0" containerName="oc" Feb 
26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.151003 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.153250 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.153466 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.153990 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.167107 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535502-rtkgq"] Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.301799 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9nt\" (UniqueName: \"kubernetes.io/projected/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f-kube-api-access-wh9nt\") pod \"auto-csr-approver-29535502-rtkgq\" (UID: \"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f\") " pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.403456 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh9nt\" (UniqueName: \"kubernetes.io/projected/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f-kube-api-access-wh9nt\") pod \"auto-csr-approver-29535502-rtkgq\" (UID: \"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f\") " pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.432679 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh9nt\" (UniqueName: 
\"kubernetes.io/projected/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f-kube-api-access-wh9nt\") pod \"auto-csr-approver-29535502-rtkgq\" (UID: \"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f\") " pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.485153 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:00 crc kubenswrapper[4805]: I0226 18:22:00.996134 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535502-rtkgq"] Feb 26 18:22:01 crc kubenswrapper[4805]: I0226 18:22:01.220695 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" event={"ID":"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f","Type":"ContainerStarted","Data":"f53bedea810bdd6d2b0a4ee2b19a426f2c83b0fddfd0ddff3f670c10f98dcc4c"} Feb 26 18:22:03 crc kubenswrapper[4805]: I0226 18:22:03.244003 4805 generic.go:334] "Generic (PLEG): container finished" podID="ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f" containerID="db39d4ba9625bd285aa89e214111356fd5cd535f0b23796157adffe996d54bf6" exitCode=0 Feb 26 18:22:03 crc kubenswrapper[4805]: I0226 18:22:03.244183 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" event={"ID":"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f","Type":"ContainerDied","Data":"db39d4ba9625bd285aa89e214111356fd5cd535f0b23796157adffe996d54bf6"} Feb 26 18:22:04 crc kubenswrapper[4805]: I0226 18:22:04.680993 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:04 crc kubenswrapper[4805]: I0226 18:22:04.820226 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh9nt\" (UniqueName: \"kubernetes.io/projected/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f-kube-api-access-wh9nt\") pod \"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f\" (UID: \"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f\") " Feb 26 18:22:04 crc kubenswrapper[4805]: I0226 18:22:04.825674 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f-kube-api-access-wh9nt" (OuterVolumeSpecName: "kube-api-access-wh9nt") pod "ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f" (UID: "ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f"). InnerVolumeSpecName "kube-api-access-wh9nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:22:04 crc kubenswrapper[4805]: I0226 18:22:04.923255 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh9nt\" (UniqueName: \"kubernetes.io/projected/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f-kube-api-access-wh9nt\") on node \"crc\" DevicePath \"\"" Feb 26 18:22:05 crc kubenswrapper[4805]: I0226 18:22:05.270325 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" event={"ID":"ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f","Type":"ContainerDied","Data":"f53bedea810bdd6d2b0a4ee2b19a426f2c83b0fddfd0ddff3f670c10f98dcc4c"} Feb 26 18:22:05 crc kubenswrapper[4805]: I0226 18:22:05.270391 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53bedea810bdd6d2b0a4ee2b19a426f2c83b0fddfd0ddff3f670c10f98dcc4c" Feb 26 18:22:05 crc kubenswrapper[4805]: I0226 18:22:05.270395 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535502-rtkgq" Feb 26 18:22:05 crc kubenswrapper[4805]: I0226 18:22:05.752822 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535496-xxdbg"] Feb 26 18:22:05 crc kubenswrapper[4805]: I0226 18:22:05.764643 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535496-xxdbg"] Feb 26 18:22:05 crc kubenswrapper[4805]: I0226 18:22:05.954316 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:22:05 crc kubenswrapper[4805]: E0226 18:22:05.954671 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:22:06 crc kubenswrapper[4805]: I0226 18:22:06.974514 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56b67f3-6a30-4521-bb90-eda160f861fa" path="/var/lib/kubelet/pods/e56b67f3-6a30-4521-bb90-eda160f861fa/volumes" Feb 26 18:22:18 crc kubenswrapper[4805]: I0226 18:22:18.953989 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:22:18 crc kubenswrapper[4805]: E0226 18:22:18.954880 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:22:29 crc kubenswrapper[4805]: I0226 18:22:29.953757 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:22:29 crc kubenswrapper[4805]: E0226 18:22:29.955093 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:22:44 crc kubenswrapper[4805]: I0226 18:22:44.953418 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:22:44 crc kubenswrapper[4805]: E0226 18:22:44.954265 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:22:48 crc kubenswrapper[4805]: I0226 18:22:48.603465 4805 scope.go:117] "RemoveContainer" containerID="ac62308522dad1df849e57bd2a32d86729c275f3de768e581c0355f9e19046c2" Feb 26 18:22:56 crc kubenswrapper[4805]: I0226 18:22:56.960682 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:22:56 crc kubenswrapper[4805]: E0226 18:22:56.961491 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.536397 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 18:23:09 crc kubenswrapper[4805]: E0226 18:23:09.537370 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f" containerName="oc" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.537389 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f" containerName="oc" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.537674 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f" containerName="oc" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.538681 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.540784 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.541453 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.541794 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.542072 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qcl6q" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.552178 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627056 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627111 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627416 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627638 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627709 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627753 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6jg\" (UniqueName: 
\"kubernetes.io/projected/630c211f-3dd5-4951-9476-249d0f6bc049-kube-api-access-kp6jg\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.627952 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.729836 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-config-data\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.730160 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.730364 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.730475 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.730636 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.730733 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.730974 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6jg\" (UniqueName: \"kubernetes.io/projected/630c211f-3dd5-4951-9476-249d0f6bc049-kube-api-access-kp6jg\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731222 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-config-data\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731102 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: 
\"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731370 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731242 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731849 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.731981 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 
18:23:09.736701 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.736700 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.738535 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.747385 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6jg\" (UniqueName: \"kubernetes.io/projected/630c211f-3dd5-4951-9476-249d0f6bc049-kube-api-access-kp6jg\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.766914 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.865947 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 18:23:09 crc kubenswrapper[4805]: I0226 18:23:09.953007 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:23:09 crc kubenswrapper[4805]: E0226 18:23:09.953357 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:23:10 crc kubenswrapper[4805]: I0226 18:23:10.412193 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 26 18:23:10 crc kubenswrapper[4805]: I0226 18:23:10.414676 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 18:23:10 crc kubenswrapper[4805]: I0226 18:23:10.973922 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"630c211f-3dd5-4951-9476-249d0f6bc049","Type":"ContainerStarted","Data":"204b756e2eee983200f5b6c59a5bb335be88cb87c4761c8089d8906af4ab2223"} Feb 26 18:23:21 crc kubenswrapper[4805]: I0226 18:23:21.953864 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:23:21 crc kubenswrapper[4805]: E0226 18:23:21.954599 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:23:34 crc kubenswrapper[4805]: I0226 18:23:34.952824 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:23:34 crc kubenswrapper[4805]: E0226 18:23:34.953821 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:23:42 crc kubenswrapper[4805]: E0226 18:23:42.755470 4805 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 26 18:23:42 crc kubenswrapper[4805]: E0226 18:23:42.756122 4805 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kp6jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(630c211f-3dd5-4951-9476-249d0f6bc049): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 26 18:23:42 crc kubenswrapper[4805]: E0226 18:23:42.757386 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="630c211f-3dd5-4951-9476-249d0f6bc049" Feb 26 18:23:43 crc kubenswrapper[4805]: E0226 18:23:43.328553 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="630c211f-3dd5-4951-9476-249d0f6bc049" Feb 26 18:23:49 crc 
kubenswrapper[4805]: I0226 18:23:49.493819 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qvmj"] Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.498504 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.504779 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qvmj"] Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.650767 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-utilities\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.650861 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8g4x\" (UniqueName: \"kubernetes.io/projected/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-kube-api-access-w8g4x\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.650887 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-catalog-content\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.752538 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-utilities\") pod 
\"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.752591 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8g4x\" (UniqueName: \"kubernetes.io/projected/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-kube-api-access-w8g4x\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.752614 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-catalog-content\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.753133 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-utilities\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.753782 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-catalog-content\") pod \"redhat-operators-6qvmj\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.791007 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8g4x\" (UniqueName: \"kubernetes.io/projected/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-kube-api-access-w8g4x\") pod \"redhat-operators-6qvmj\" (UID: 
\"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.860281 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:23:49 crc kubenswrapper[4805]: I0226 18:23:49.952884 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:23:49 crc kubenswrapper[4805]: E0226 18:23:49.953186 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:23:50 crc kubenswrapper[4805]: I0226 18:23:50.347698 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qvmj"] Feb 26 18:23:50 crc kubenswrapper[4805]: I0226 18:23:50.387200 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qvmj" event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerStarted","Data":"0f5bb45fd290dbc8ebaf1b1f6a098b4c44db2d763ac5cbc764f83db3a330c3c7"} Feb 26 18:23:51 crc kubenswrapper[4805]: I0226 18:23:51.398677 4805 generic.go:334] "Generic (PLEG): container finished" podID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerID="f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d" exitCode=0 Feb 26 18:23:51 crc kubenswrapper[4805]: I0226 18:23:51.398806 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qvmj" 
event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerDied","Data":"f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d"} Feb 26 18:23:53 crc kubenswrapper[4805]: I0226 18:23:53.421255 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qvmj" event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerStarted","Data":"fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133"} Feb 26 18:23:58 crc kubenswrapper[4805]: I0226 18:23:58.400386 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 26 18:23:59 crc kubenswrapper[4805]: I0226 18:23:59.495596 4805 generic.go:334] "Generic (PLEG): container finished" podID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerID="fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133" exitCode=0 Feb 26 18:23:59 crc kubenswrapper[4805]: I0226 18:23:59.495883 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qvmj" event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerDied","Data":"fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133"} Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.145936 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535504-xn8cm"] Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.147924 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.159176 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535504-xn8cm"] Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.192736 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.192896 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.193094 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.318801 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx55j\" (UniqueName: \"kubernetes.io/projected/dae44122-1933-4cac-9b95-21b7c0cbe89f-kube-api-access-lx55j\") pod \"auto-csr-approver-29535504-xn8cm\" (UID: \"dae44122-1933-4cac-9b95-21b7c0cbe89f\") " pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.420695 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx55j\" (UniqueName: \"kubernetes.io/projected/dae44122-1933-4cac-9b95-21b7c0cbe89f-kube-api-access-lx55j\") pod \"auto-csr-approver-29535504-xn8cm\" (UID: \"dae44122-1933-4cac-9b95-21b7c0cbe89f\") " pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.441418 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx55j\" (UniqueName: \"kubernetes.io/projected/dae44122-1933-4cac-9b95-21b7c0cbe89f-kube-api-access-lx55j\") pod \"auto-csr-approver-29535504-xn8cm\" (UID: \"dae44122-1933-4cac-9b95-21b7c0cbe89f\") " 
pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.507231 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qvmj" event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerStarted","Data":"59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c"} Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.510179 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"630c211f-3dd5-4951-9476-249d0f6bc049","Type":"ContainerStarted","Data":"ee898cafad946825d4f5ed26253700d08f31f8e0c4d5e5ed869eb79c9a933357"} Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.511935 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.534978 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qvmj" podStartSLOduration=3.055871095 podStartE2EDuration="11.534958661s" podCreationTimestamp="2026-02-26 18:23:49 +0000 UTC" firstStartedPulling="2026-02-26 18:23:51.402803122 +0000 UTC m=+4145.964557461" lastFinishedPulling="2026-02-26 18:23:59.881890678 +0000 UTC m=+4154.443645027" observedRunningTime="2026-02-26 18:24:00.525288405 +0000 UTC m=+4155.087042744" watchObservedRunningTime="2026-02-26 18:24:00.534958661 +0000 UTC m=+4155.096713000" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.565660 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.582241885 podStartE2EDuration="52.56564118s" podCreationTimestamp="2026-02-26 18:23:08 +0000 UTC" firstStartedPulling="2026-02-26 18:23:10.414396018 +0000 UTC m=+4104.976150357" lastFinishedPulling="2026-02-26 18:23:58.397795303 +0000 UTC m=+4152.959549652" observedRunningTime="2026-02-26 
18:24:00.558510669 +0000 UTC m=+4155.120265008" watchObservedRunningTime="2026-02-26 18:24:00.56564118 +0000 UTC m=+4155.127395519" Feb 26 18:24:00 crc kubenswrapper[4805]: I0226 18:24:00.995082 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535504-xn8cm"] Feb 26 18:24:01 crc kubenswrapper[4805]: W0226 18:24:01.003652 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddae44122_1933_4cac_9b95_21b7c0cbe89f.slice/crio-92ed75edc3f550f4c161305cd0a4392e75ffa3f84547f0e9f8c60a80425da743 WatchSource:0}: Error finding container 92ed75edc3f550f4c161305cd0a4392e75ffa3f84547f0e9f8c60a80425da743: Status 404 returned error can't find the container with id 92ed75edc3f550f4c161305cd0a4392e75ffa3f84547f0e9f8c60a80425da743 Feb 26 18:24:01 crc kubenswrapper[4805]: I0226 18:24:01.522203 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" event={"ID":"dae44122-1933-4cac-9b95-21b7c0cbe89f","Type":"ContainerStarted","Data":"92ed75edc3f550f4c161305cd0a4392e75ffa3f84547f0e9f8c60a80425da743"} Feb 26 18:24:02 crc kubenswrapper[4805]: I0226 18:24:02.533539 4805 generic.go:334] "Generic (PLEG): container finished" podID="dae44122-1933-4cac-9b95-21b7c0cbe89f" containerID="b6568c0cd7a6a44894a07372f9f19a245c1d97ba938ac263ae4eedc5b51ae9da" exitCode=0 Feb 26 18:24:02 crc kubenswrapper[4805]: I0226 18:24:02.534105 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" event={"ID":"dae44122-1933-4cac-9b95-21b7c0cbe89f","Type":"ContainerDied","Data":"b6568c0cd7a6a44894a07372f9f19a245c1d97ba938ac263ae4eedc5b51ae9da"} Feb 26 18:24:03 crc kubenswrapper[4805]: I0226 18:24:03.953954 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:24:03 crc kubenswrapper[4805]: E0226 18:24:03.954406 4805 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.035616 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.106554 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx55j\" (UniqueName: \"kubernetes.io/projected/dae44122-1933-4cac-9b95-21b7c0cbe89f-kube-api-access-lx55j\") pod \"dae44122-1933-4cac-9b95-21b7c0cbe89f\" (UID: \"dae44122-1933-4cac-9b95-21b7c0cbe89f\") " Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.112879 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae44122-1933-4cac-9b95-21b7c0cbe89f-kube-api-access-lx55j" (OuterVolumeSpecName: "kube-api-access-lx55j") pod "dae44122-1933-4cac-9b95-21b7c0cbe89f" (UID: "dae44122-1933-4cac-9b95-21b7c0cbe89f"). InnerVolumeSpecName "kube-api-access-lx55j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.209271 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx55j\" (UniqueName: \"kubernetes.io/projected/dae44122-1933-4cac-9b95-21b7c0cbe89f-kube-api-access-lx55j\") on node \"crc\" DevicePath \"\"" Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.553062 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" event={"ID":"dae44122-1933-4cac-9b95-21b7c0cbe89f","Type":"ContainerDied","Data":"92ed75edc3f550f4c161305cd0a4392e75ffa3f84547f0e9f8c60a80425da743"} Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.553104 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ed75edc3f550f4c161305cd0a4392e75ffa3f84547f0e9f8c60a80425da743" Feb 26 18:24:04 crc kubenswrapper[4805]: I0226 18:24:04.553119 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535504-xn8cm" Feb 26 18:24:05 crc kubenswrapper[4805]: I0226 18:24:05.120358 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535498-6z6zw"] Feb 26 18:24:05 crc kubenswrapper[4805]: I0226 18:24:05.134560 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535498-6z6zw"] Feb 26 18:24:06 crc kubenswrapper[4805]: I0226 18:24:06.979111 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3953e5e0-11ea-4570-a673-fcfd8122d2bd" path="/var/lib/kubelet/pods/3953e5e0-11ea-4570-a673-fcfd8122d2bd/volumes" Feb 26 18:24:09 crc kubenswrapper[4805]: I0226 18:24:09.861132 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:24:09 crc kubenswrapper[4805]: I0226 18:24:09.861502 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:24:09 crc kubenswrapper[4805]: I0226 18:24:09.905536 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:24:10 crc kubenswrapper[4805]: I0226 18:24:10.689768 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:24:10 crc kubenswrapper[4805]: I0226 18:24:10.743558 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qvmj"] Feb 26 18:24:12 crc kubenswrapper[4805]: I0226 18:24:12.644631 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qvmj" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="registry-server" containerID="cri-o://59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c" gracePeriod=2 Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.261123 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.357240 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-catalog-content\") pod \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.357329 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8g4x\" (UniqueName: \"kubernetes.io/projected/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-kube-api-access-w8g4x\") pod \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.357474 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-utilities\") pod \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\" (UID: \"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce\") " Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.362114 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-utilities" (OuterVolumeSpecName: "utilities") pod "e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" (UID: "e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.364576 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-kube-api-access-w8g4x" (OuterVolumeSpecName: "kube-api-access-w8g4x") pod "e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" (UID: "e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce"). InnerVolumeSpecName "kube-api-access-w8g4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.460848 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8g4x\" (UniqueName: \"kubernetes.io/projected/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-kube-api-access-w8g4x\") on node \"crc\" DevicePath \"\"" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.460880 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.501899 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" (UID: "e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.562605 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.657336 4805 generic.go:334] "Generic (PLEG): container finished" podID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerID="59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c" exitCode=0 Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.657380 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qvmj" event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerDied","Data":"59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c"} Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.657405 4805 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-6qvmj" event={"ID":"e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce","Type":"ContainerDied","Data":"0f5bb45fd290dbc8ebaf1b1f6a098b4c44db2d763ac5cbc764f83db3a330c3c7"} Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.657422 4805 scope.go:117] "RemoveContainer" containerID="59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.657420 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qvmj" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.688817 4805 scope.go:117] "RemoveContainer" containerID="fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.696080 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qvmj"] Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.712972 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qvmj"] Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.713768 4805 scope.go:117] "RemoveContainer" containerID="f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.764647 4805 scope.go:117] "RemoveContainer" containerID="59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c" Feb 26 18:24:13 crc kubenswrapper[4805]: E0226 18:24:13.765135 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c\": container with ID starting with 59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c not found: ID does not exist" containerID="59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.765175 4805 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c"} err="failed to get container status \"59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c\": rpc error: code = NotFound desc = could not find container \"59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c\": container with ID starting with 59bd8a86ec5632ac98694d1a23863d3c641600f1937f227a98676f8286522c6c not found: ID does not exist" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.765200 4805 scope.go:117] "RemoveContainer" containerID="fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133" Feb 26 18:24:13 crc kubenswrapper[4805]: E0226 18:24:13.765503 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133\": container with ID starting with fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133 not found: ID does not exist" containerID="fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.765522 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133"} err="failed to get container status \"fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133\": rpc error: code = NotFound desc = could not find container \"fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133\": container with ID starting with fc4666e123b2cf901155d176aaa05e6f5ffdb027af85d3e05e9c55ccba5a0133 not found: ID does not exist" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.765579 4805 scope.go:117] "RemoveContainer" containerID="f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d" Feb 26 18:24:13 crc kubenswrapper[4805]: E0226 
18:24:13.765950 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d\": container with ID starting with f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d not found: ID does not exist" containerID="f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d" Feb 26 18:24:13 crc kubenswrapper[4805]: I0226 18:24:13.765974 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d"} err="failed to get container status \"f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d\": rpc error: code = NotFound desc = could not find container \"f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d\": container with ID starting with f76f2f19e570ffa08a408cbd8c55960817ac28446f78a1b3970c37905949b22d not found: ID does not exist" Feb 26 18:24:14 crc kubenswrapper[4805]: I0226 18:24:14.979046 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" path="/var/lib/kubelet/pods/e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce/volumes" Feb 26 18:24:15 crc kubenswrapper[4805]: I0226 18:24:15.954439 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:24:15 crc kubenswrapper[4805]: E0226 18:24:15.955203 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:24:27 crc kubenswrapper[4805]: I0226 18:24:27.953607 
4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:24:27 crc kubenswrapper[4805]: E0226 18:24:27.954355 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:24:39 crc kubenswrapper[4805]: I0226 18:24:39.954415 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:24:39 crc kubenswrapper[4805]: E0226 18:24:39.955326 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:24:48 crc kubenswrapper[4805]: I0226 18:24:48.702737 4805 scope.go:117] "RemoveContainer" containerID="d5d3c2d664a5374074edcedb444030c0c0b56321d9bf0b1558a0d7233289d050" Feb 26 18:24:50 crc kubenswrapper[4805]: I0226 18:24:50.954845 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:24:50 crc kubenswrapper[4805]: E0226 18:24:50.956055 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:25:03 crc kubenswrapper[4805]: I0226 18:25:03.953962 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:25:03 crc kubenswrapper[4805]: E0226 18:25:03.955229 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:25:16 crc kubenswrapper[4805]: I0226 18:25:16.961737 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:25:16 crc kubenswrapper[4805]: E0226 18:25:16.962556 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:25:31 crc kubenswrapper[4805]: I0226 18:25:31.953514 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:25:31 crc kubenswrapper[4805]: E0226 18:25:31.954367 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:25:46 crc kubenswrapper[4805]: I0226 18:25:46.960797 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:25:47 crc kubenswrapper[4805]: I0226 18:25:47.921207 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"3f9c3533af06cfd13430cde5a4664a1a9f4b41918996448ae9b81875688ff177"} Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.145857 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535506-x5d75"] Feb 26 18:26:00 crc kubenswrapper[4805]: E0226 18:26:00.146738 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae44122-1933-4cac-9b95-21b7c0cbe89f" containerName="oc" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.146751 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae44122-1933-4cac-9b95-21b7c0cbe89f" containerName="oc" Feb 26 18:26:00 crc kubenswrapper[4805]: E0226 18:26:00.146774 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="extract-utilities" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.146782 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="extract-utilities" Feb 26 18:26:00 crc kubenswrapper[4805]: E0226 18:26:00.146806 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="registry-server" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.146813 4805 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="registry-server" Feb 26 18:26:00 crc kubenswrapper[4805]: E0226 18:26:00.146825 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="extract-content" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.146831 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="extract-content" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.147003 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55ea2ae-dc4b-4654-a11f-fb79d35fc2ce" containerName="registry-server" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.147048 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae44122-1933-4cac-9b95-21b7c0cbe89f" containerName="oc" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.147844 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.150123 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.150874 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.164914 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.168413 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535506-x5d75"] Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.268823 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqc2t\" (UniqueName: \"kubernetes.io/projected/1324f8bb-6699-494d-af28-301c250492e7-kube-api-access-gqc2t\") pod \"auto-csr-approver-29535506-x5d75\" (UID: \"1324f8bb-6699-494d-af28-301c250492e7\") " pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.371435 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqc2t\" (UniqueName: \"kubernetes.io/projected/1324f8bb-6699-494d-af28-301c250492e7-kube-api-access-gqc2t\") pod \"auto-csr-approver-29535506-x5d75\" (UID: \"1324f8bb-6699-494d-af28-301c250492e7\") " pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.389654 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqc2t\" (UniqueName: \"kubernetes.io/projected/1324f8bb-6699-494d-af28-301c250492e7-kube-api-access-gqc2t\") pod \"auto-csr-approver-29535506-x5d75\" (UID: \"1324f8bb-6699-494d-af28-301c250492e7\") " 
pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.465243 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:00 crc kubenswrapper[4805]: I0226 18:26:00.967304 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535506-x5d75"] Feb 26 18:26:01 crc kubenswrapper[4805]: W0226 18:26:01.370168 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1324f8bb_6699_494d_af28_301c250492e7.slice/crio-60a7477ad40689f88a74df3ea9ae9d68ebe0e7063172bb82179661f1ec0ff9f4 WatchSource:0}: Error finding container 60a7477ad40689f88a74df3ea9ae9d68ebe0e7063172bb82179661f1ec0ff9f4: Status 404 returned error can't find the container with id 60a7477ad40689f88a74df3ea9ae9d68ebe0e7063172bb82179661f1ec0ff9f4 Feb 26 18:26:02 crc kubenswrapper[4805]: I0226 18:26:02.062780 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535506-x5d75" event={"ID":"1324f8bb-6699-494d-af28-301c250492e7","Type":"ContainerStarted","Data":"60a7477ad40689f88a74df3ea9ae9d68ebe0e7063172bb82179661f1ec0ff9f4"} Feb 26 18:26:03 crc kubenswrapper[4805]: I0226 18:26:03.077876 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535506-x5d75" event={"ID":"1324f8bb-6699-494d-af28-301c250492e7","Type":"ContainerStarted","Data":"3c16f7b0ecda5d0eb84f7969d012c6353c1d243245825a0ed2e7de67728a0796"} Feb 26 18:26:03 crc kubenswrapper[4805]: I0226 18:26:03.095858 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535506-x5d75" podStartSLOduration=2.093084063 podStartE2EDuration="3.095838219s" podCreationTimestamp="2026-02-26 18:26:00 +0000 UTC" firstStartedPulling="2026-02-26 18:26:01.372688632 +0000 UTC 
m=+4275.934442981" lastFinishedPulling="2026-02-26 18:26:02.375442798 +0000 UTC m=+4276.937197137" observedRunningTime="2026-02-26 18:26:03.091463588 +0000 UTC m=+4277.653217917" watchObservedRunningTime="2026-02-26 18:26:03.095838219 +0000 UTC m=+4277.657592558" Feb 26 18:26:03 crc kubenswrapper[4805]: E0226 18:26:03.365509 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1324f8bb_6699_494d_af28_301c250492e7.slice/crio-conmon-3c16f7b0ecda5d0eb84f7969d012c6353c1d243245825a0ed2e7de67728a0796.scope\": RecentStats: unable to find data in memory cache]" Feb 26 18:26:04 crc kubenswrapper[4805]: I0226 18:26:04.089391 4805 generic.go:334] "Generic (PLEG): container finished" podID="1324f8bb-6699-494d-af28-301c250492e7" containerID="3c16f7b0ecda5d0eb84f7969d012c6353c1d243245825a0ed2e7de67728a0796" exitCode=0 Feb 26 18:26:04 crc kubenswrapper[4805]: I0226 18:26:04.089558 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535506-x5d75" event={"ID":"1324f8bb-6699-494d-af28-301c250492e7","Type":"ContainerDied","Data":"3c16f7b0ecda5d0eb84f7969d012c6353c1d243245825a0ed2e7de67728a0796"} Feb 26 18:26:05 crc kubenswrapper[4805]: I0226 18:26:05.675554 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:05 crc kubenswrapper[4805]: I0226 18:26:05.855136 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqc2t\" (UniqueName: \"kubernetes.io/projected/1324f8bb-6699-494d-af28-301c250492e7-kube-api-access-gqc2t\") pod \"1324f8bb-6699-494d-af28-301c250492e7\" (UID: \"1324f8bb-6699-494d-af28-301c250492e7\") " Feb 26 18:26:05 crc kubenswrapper[4805]: I0226 18:26:05.861941 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1324f8bb-6699-494d-af28-301c250492e7-kube-api-access-gqc2t" (OuterVolumeSpecName: "kube-api-access-gqc2t") pod "1324f8bb-6699-494d-af28-301c250492e7" (UID: "1324f8bb-6699-494d-af28-301c250492e7"). InnerVolumeSpecName "kube-api-access-gqc2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:26:05 crc kubenswrapper[4805]: I0226 18:26:05.957717 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqc2t\" (UniqueName: \"kubernetes.io/projected/1324f8bb-6699-494d-af28-301c250492e7-kube-api-access-gqc2t\") on node \"crc\" DevicePath \"\"" Feb 26 18:26:06 crc kubenswrapper[4805]: I0226 18:26:06.108847 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535506-x5d75" event={"ID":"1324f8bb-6699-494d-af28-301c250492e7","Type":"ContainerDied","Data":"60a7477ad40689f88a74df3ea9ae9d68ebe0e7063172bb82179661f1ec0ff9f4"} Feb 26 18:26:06 crc kubenswrapper[4805]: I0226 18:26:06.108888 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60a7477ad40689f88a74df3ea9ae9d68ebe0e7063172bb82179661f1ec0ff9f4" Feb 26 18:26:06 crc kubenswrapper[4805]: I0226 18:26:06.109321 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535506-x5d75" Feb 26 18:26:06 crc kubenswrapper[4805]: I0226 18:26:06.174392 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535500-nkqcq"] Feb 26 18:26:06 crc kubenswrapper[4805]: I0226 18:26:06.183909 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535500-nkqcq"] Feb 26 18:26:06 crc kubenswrapper[4805]: I0226 18:26:06.975822 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aee106a-2544-47a1-bd85-4ddce85fd4e0" path="/var/lib/kubelet/pods/6aee106a-2544-47a1-bd85-4ddce85fd4e0/volumes" Feb 26 18:26:48 crc kubenswrapper[4805]: I0226 18:26:48.826742 4805 scope.go:117] "RemoveContainer" containerID="e156e37e910cada355630962ee0731e822e9446d66c7f5953e99860c174d3c77" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.144991 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535508-9cpt6"] Feb 26 18:28:00 crc kubenswrapper[4805]: E0226 18:28:00.146364 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1324f8bb-6699-494d-af28-301c250492e7" containerName="oc" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.146378 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1324f8bb-6699-494d-af28-301c250492e7" containerName="oc" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.146565 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1324f8bb-6699-494d-af28-301c250492e7" containerName="oc" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.147335 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.149337 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.149682 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.149745 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.162216 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535508-9cpt6"] Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.263097 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq529\" (UniqueName: \"kubernetes.io/projected/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7-kube-api-access-xq529\") pod \"auto-csr-approver-29535508-9cpt6\" (UID: \"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7\") " pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.365472 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq529\" (UniqueName: \"kubernetes.io/projected/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7-kube-api-access-xq529\") pod \"auto-csr-approver-29535508-9cpt6\" (UID: \"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7\") " pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.393163 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq529\" (UniqueName: \"kubernetes.io/projected/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7-kube-api-access-xq529\") pod \"auto-csr-approver-29535508-9cpt6\" (UID: \"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7\") " 
pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:00 crc kubenswrapper[4805]: I0226 18:28:00.514765 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:01 crc kubenswrapper[4805]: I0226 18:28:01.008350 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535508-9cpt6"] Feb 26 18:28:01 crc kubenswrapper[4805]: W0226 18:28:01.015494 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode531362b_5fcf_4aac_bf9c_ddbee35eb1b7.slice/crio-538b0f5a55fead0b12fdb7a85d4576e731e7e80bae648ad1838f715d23bb3b2e WatchSource:0}: Error finding container 538b0f5a55fead0b12fdb7a85d4576e731e7e80bae648ad1838f715d23bb3b2e: Status 404 returned error can't find the container with id 538b0f5a55fead0b12fdb7a85d4576e731e7e80bae648ad1838f715d23bb3b2e Feb 26 18:28:01 crc kubenswrapper[4805]: I0226 18:28:01.288665 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" event={"ID":"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7","Type":"ContainerStarted","Data":"538b0f5a55fead0b12fdb7a85d4576e731e7e80bae648ad1838f715d23bb3b2e"} Feb 26 18:28:02 crc kubenswrapper[4805]: I0226 18:28:02.302406 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" event={"ID":"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7","Type":"ContainerStarted","Data":"284bb6e4973c75a892264e2366af98f23bed65692bcab320a4e10f1af9e107f2"} Feb 26 18:28:02 crc kubenswrapper[4805]: I0226 18:28:02.326589 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" podStartSLOduration=1.377649724 podStartE2EDuration="2.326570912s" podCreationTimestamp="2026-02-26 18:28:00 +0000 UTC" firstStartedPulling="2026-02-26 18:28:01.019780632 +0000 UTC 
m=+4395.581534981" lastFinishedPulling="2026-02-26 18:28:01.96870183 +0000 UTC m=+4396.530456169" observedRunningTime="2026-02-26 18:28:02.32018148 +0000 UTC m=+4396.881935839" watchObservedRunningTime="2026-02-26 18:28:02.326570912 +0000 UTC m=+4396.888325251" Feb 26 18:28:02 crc kubenswrapper[4805]: I0226 18:28:02.978458 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:28:02 crc kubenswrapper[4805]: I0226 18:28:02.978565 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:28:03 crc kubenswrapper[4805]: I0226 18:28:03.314455 4805 generic.go:334] "Generic (PLEG): container finished" podID="e531362b-5fcf-4aac-bf9c-ddbee35eb1b7" containerID="284bb6e4973c75a892264e2366af98f23bed65692bcab320a4e10f1af9e107f2" exitCode=0 Feb 26 18:28:03 crc kubenswrapper[4805]: I0226 18:28:03.314820 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" event={"ID":"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7","Type":"ContainerDied","Data":"284bb6e4973c75a892264e2366af98f23bed65692bcab320a4e10f1af9e107f2"} Feb 26 18:28:04 crc kubenswrapper[4805]: I0226 18:28:04.809472 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:04 crc kubenswrapper[4805]: I0226 18:28:04.962371 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq529\" (UniqueName: \"kubernetes.io/projected/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7-kube-api-access-xq529\") pod \"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7\" (UID: \"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7\") " Feb 26 18:28:04 crc kubenswrapper[4805]: I0226 18:28:04.968124 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7-kube-api-access-xq529" (OuterVolumeSpecName: "kube-api-access-xq529") pod "e531362b-5fcf-4aac-bf9c-ddbee35eb1b7" (UID: "e531362b-5fcf-4aac-bf9c-ddbee35eb1b7"). InnerVolumeSpecName "kube-api-access-xq529". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:28:05 crc kubenswrapper[4805]: I0226 18:28:05.067865 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq529\" (UniqueName: \"kubernetes.io/projected/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7-kube-api-access-xq529\") on node \"crc\" DevicePath \"\"" Feb 26 18:28:05 crc kubenswrapper[4805]: I0226 18:28:05.348264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" event={"ID":"e531362b-5fcf-4aac-bf9c-ddbee35eb1b7","Type":"ContainerDied","Data":"538b0f5a55fead0b12fdb7a85d4576e731e7e80bae648ad1838f715d23bb3b2e"} Feb 26 18:28:05 crc kubenswrapper[4805]: I0226 18:28:05.348301 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="538b0f5a55fead0b12fdb7a85d4576e731e7e80bae648ad1838f715d23bb3b2e" Feb 26 18:28:05 crc kubenswrapper[4805]: I0226 18:28:05.348362 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535508-9cpt6" Feb 26 18:28:05 crc kubenswrapper[4805]: I0226 18:28:05.411898 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535502-rtkgq"] Feb 26 18:28:05 crc kubenswrapper[4805]: I0226 18:28:05.423731 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535502-rtkgq"] Feb 26 18:28:06 crc kubenswrapper[4805]: I0226 18:28:06.983337 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f" path="/var/lib/kubelet/pods/ba1fd226-243b-4ef3-b29b-e33bf6d5bb5f/volumes" Feb 26 18:28:32 crc kubenswrapper[4805]: I0226 18:28:32.978095 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:28:32 crc kubenswrapper[4805]: I0226 18:28:32.978754 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:28:48 crc kubenswrapper[4805]: I0226 18:28:48.931904 4805 scope.go:117] "RemoveContainer" containerID="db39d4ba9625bd285aa89e214111356fd5cd535f0b23796157adffe996d54bf6" Feb 26 18:29:02 crc kubenswrapper[4805]: I0226 18:29:02.978483 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:29:02 crc kubenswrapper[4805]: 
I0226 18:29:02.979146 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:29:02 crc kubenswrapper[4805]: I0226 18:29:02.979201 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 18:29:02 crc kubenswrapper[4805]: I0226 18:29:02.980165 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f9c3533af06cfd13430cde5a4664a1a9f4b41918996448ae9b81875688ff177"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 18:29:02 crc kubenswrapper[4805]: I0226 18:29:02.980226 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://3f9c3533af06cfd13430cde5a4664a1a9f4b41918996448ae9b81875688ff177" gracePeriod=600 Feb 26 18:29:03 crc kubenswrapper[4805]: I0226 18:29:03.954218 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="3f9c3533af06cfd13430cde5a4664a1a9f4b41918996448ae9b81875688ff177" exitCode=0 Feb 26 18:29:03 crc kubenswrapper[4805]: I0226 18:29:03.954288 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"3f9c3533af06cfd13430cde5a4664a1a9f4b41918996448ae9b81875688ff177"} Feb 26 18:29:03 crc 
kubenswrapper[4805]: I0226 18:29:03.955160 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166"} Feb 26 18:29:03 crc kubenswrapper[4805]: I0226 18:29:03.955221 4805 scope.go:117] "RemoveContainer" containerID="4415e2b16fe48fd329f918b4a5ab7fde8934f7960e3569e0600ff07ec9d059dd" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.189727 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wpmw9"] Feb 26 18:29:44 crc kubenswrapper[4805]: E0226 18:29:44.190823 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e531362b-5fcf-4aac-bf9c-ddbee35eb1b7" containerName="oc" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.190842 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e531362b-5fcf-4aac-bf9c-ddbee35eb1b7" containerName="oc" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.191130 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e531362b-5fcf-4aac-bf9c-ddbee35eb1b7" containerName="oc" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.192978 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.234478 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpmw9"] Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.349592 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-utilities\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.349741 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnhn\" (UniqueName: \"kubernetes.io/projected/4833f528-f466-4ea0-bb4a-37053e4941bb-kube-api-access-jrnhn\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.349765 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-catalog-content\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.389792 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2lm4h"] Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.392932 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.401346 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lm4h"] Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.452633 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrnhn\" (UniqueName: \"kubernetes.io/projected/4833f528-f466-4ea0-bb4a-37053e4941bb-kube-api-access-jrnhn\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.452695 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-catalog-content\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.452859 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-utilities\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.453722 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-utilities\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.453752 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-catalog-content\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.475885 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrnhn\" (UniqueName: \"kubernetes.io/projected/4833f528-f466-4ea0-bb4a-37053e4941bb-kube-api-access-jrnhn\") pod \"certified-operators-wpmw9\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.525895 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.554725 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmfj4\" (UniqueName: \"kubernetes.io/projected/04570f88-acf5-40a8-815b-442b40e626f3-kube-api-access-pmfj4\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.554790 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-utilities\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.554825 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-catalog-content\") pod \"community-operators-2lm4h\" (UID: 
\"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.656745 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmfj4\" (UniqueName: \"kubernetes.io/projected/04570f88-acf5-40a8-815b-442b40e626f3-kube-api-access-pmfj4\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.656841 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-utilities\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.656915 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-catalog-content\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.657623 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-catalog-content\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.658648 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-utilities\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") 
" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.685266 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmfj4\" (UniqueName: \"kubernetes.io/projected/04570f88-acf5-40a8-815b-442b40e626f3-kube-api-access-pmfj4\") pod \"community-operators-2lm4h\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:44 crc kubenswrapper[4805]: I0226 18:29:44.721348 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:45 crc kubenswrapper[4805]: I0226 18:29:45.110187 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpmw9"] Feb 26 18:29:45 crc kubenswrapper[4805]: I0226 18:29:45.358617 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lm4h"] Feb 26 18:29:45 crc kubenswrapper[4805]: I0226 18:29:45.419090 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerStarted","Data":"0abe18811c0e74e56f556cd69709ba4a41e73603fdb4d0d3860b0df4d97d2e26"} Feb 26 18:29:45 crc kubenswrapper[4805]: I0226 18:29:45.423664 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerStarted","Data":"7771ee7f91fe40109c26cc45b18b69170402d49c18981cd119ab00090bce3f61"} Feb 26 18:29:46 crc kubenswrapper[4805]: I0226 18:29:46.432162 4805 generic.go:334] "Generic (PLEG): container finished" podID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerID="7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48" exitCode=0 Feb 26 18:29:46 crc kubenswrapper[4805]: I0226 18:29:46.432387 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerDied","Data":"7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48"} Feb 26 18:29:46 crc kubenswrapper[4805]: I0226 18:29:46.434050 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 18:29:46 crc kubenswrapper[4805]: I0226 18:29:46.435585 4805 generic.go:334] "Generic (PLEG): container finished" podID="04570f88-acf5-40a8-815b-442b40e626f3" containerID="4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca" exitCode=0 Feb 26 18:29:46 crc kubenswrapper[4805]: I0226 18:29:46.435619 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerDied","Data":"4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca"} Feb 26 18:29:47 crc kubenswrapper[4805]: I0226 18:29:47.997360 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-68kjg"] Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.001748 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.014316 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-68kjg"] Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.175325 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j8hs\" (UniqueName: \"kubernetes.io/projected/feba8a05-3532-40f7-829e-35feaf8a3f4b-kube-api-access-2j8hs\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.175471 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-catalog-content\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.175505 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-utilities\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.277725 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j8hs\" (UniqueName: \"kubernetes.io/projected/feba8a05-3532-40f7-829e-35feaf8a3f4b-kube-api-access-2j8hs\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.277886 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-catalog-content\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.277933 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-utilities\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.278672 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-catalog-content\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.278920 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-utilities\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.300333 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j8hs\" (UniqueName: \"kubernetes.io/projected/feba8a05-3532-40f7-829e-35feaf8a3f4b-kube-api-access-2j8hs\") pod \"redhat-marketplace-68kjg\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.326667 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.462563 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerStarted","Data":"f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86"} Feb 26 18:29:48 crc kubenswrapper[4805]: I0226 18:29:48.469597 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerStarted","Data":"90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7"} Feb 26 18:29:49 crc kubenswrapper[4805]: I0226 18:29:49.091342 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-68kjg"] Feb 26 18:29:49 crc kubenswrapper[4805]: W0226 18:29:49.097365 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeba8a05_3532_40f7_829e_35feaf8a3f4b.slice/crio-9315d67f9eec0f41499d9608cfa49026e999723100c7a68706f8666f5f89010d WatchSource:0}: Error finding container 9315d67f9eec0f41499d9608cfa49026e999723100c7a68706f8666f5f89010d: Status 404 returned error can't find the container with id 9315d67f9eec0f41499d9608cfa49026e999723100c7a68706f8666f5f89010d Feb 26 18:29:49 crc kubenswrapper[4805]: I0226 18:29:49.480145 4805 generic.go:334] "Generic (PLEG): container finished" podID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerID="b6b77f6d4347a3cb23d9bc33f1944ffa0e72eddb99f977fc71a9267540851c07" exitCode=0 Feb 26 18:29:49 crc kubenswrapper[4805]: I0226 18:29:49.481137 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" 
event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerDied","Data":"b6b77f6d4347a3cb23d9bc33f1944ffa0e72eddb99f977fc71a9267540851c07"} Feb 26 18:29:49 crc kubenswrapper[4805]: I0226 18:29:49.481294 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerStarted","Data":"9315d67f9eec0f41499d9608cfa49026e999723100c7a68706f8666f5f89010d"} Feb 26 18:29:50 crc kubenswrapper[4805]: I0226 18:29:50.510529 4805 generic.go:334] "Generic (PLEG): container finished" podID="04570f88-acf5-40a8-815b-442b40e626f3" containerID="90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7" exitCode=0 Feb 26 18:29:50 crc kubenswrapper[4805]: I0226 18:29:50.511253 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerDied","Data":"90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7"} Feb 26 18:29:50 crc kubenswrapper[4805]: I0226 18:29:50.523046 4805 generic.go:334] "Generic (PLEG): container finished" podID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerID="f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86" exitCode=0 Feb 26 18:29:50 crc kubenswrapper[4805]: I0226 18:29:50.523118 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerDied","Data":"f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86"} Feb 26 18:29:51 crc kubenswrapper[4805]: I0226 18:29:51.542443 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerStarted","Data":"e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54"} Feb 26 18:29:51 crc kubenswrapper[4805]: I0226 
18:29:51.545264 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerStarted","Data":"927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb"} Feb 26 18:29:51 crc kubenswrapper[4805]: I0226 18:29:51.547569 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerStarted","Data":"6c772f2af5fcf962075380b190c9da85da7802807aa4aef28e04908ffc56437b"} Feb 26 18:29:51 crc kubenswrapper[4805]: I0226 18:29:51.576418 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2lm4h" podStartSLOduration=3.0885185 podStartE2EDuration="7.576400341s" podCreationTimestamp="2026-02-26 18:29:44 +0000 UTC" firstStartedPulling="2026-02-26 18:29:46.437326947 +0000 UTC m=+4500.999081286" lastFinishedPulling="2026-02-26 18:29:50.925208788 +0000 UTC m=+4505.486963127" observedRunningTime="2026-02-26 18:29:51.559904865 +0000 UTC m=+4506.121659204" watchObservedRunningTime="2026-02-26 18:29:51.576400341 +0000 UTC m=+4506.138154680" Feb 26 18:29:51 crc kubenswrapper[4805]: I0226 18:29:51.587141 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wpmw9" podStartSLOduration=2.7527908290000003 podStartE2EDuration="7.587115982s" podCreationTimestamp="2026-02-26 18:29:44 +0000 UTC" firstStartedPulling="2026-02-26 18:29:46.433760667 +0000 UTC m=+4500.995515006" lastFinishedPulling="2026-02-26 18:29:51.26808582 +0000 UTC m=+4505.829840159" observedRunningTime="2026-02-26 18:29:51.57951621 +0000 UTC m=+4506.141270549" watchObservedRunningTime="2026-02-26 18:29:51.587115982 +0000 UTC m=+4506.148870331" Feb 26 18:29:52 crc kubenswrapper[4805]: I0226 18:29:52.558537 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerID="6c772f2af5fcf962075380b190c9da85da7802807aa4aef28e04908ffc56437b" exitCode=0 Feb 26 18:29:52 crc kubenswrapper[4805]: I0226 18:29:52.558621 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerDied","Data":"6c772f2af5fcf962075380b190c9da85da7802807aa4aef28e04908ffc56437b"} Feb 26 18:29:53 crc kubenswrapper[4805]: I0226 18:29:53.571929 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerStarted","Data":"26898196b0c9235d9818b1d42ec24eec5a9d594ac0a456ecf3eeabf62615eb1f"} Feb 26 18:29:53 crc kubenswrapper[4805]: I0226 18:29:53.612313 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-68kjg" podStartSLOduration=3.13478254 podStartE2EDuration="6.612290512s" podCreationTimestamp="2026-02-26 18:29:47 +0000 UTC" firstStartedPulling="2026-02-26 18:29:49.482847587 +0000 UTC m=+4504.044601926" lastFinishedPulling="2026-02-26 18:29:52.960355559 +0000 UTC m=+4507.522109898" observedRunningTime="2026-02-26 18:29:53.599322444 +0000 UTC m=+4508.161076793" watchObservedRunningTime="2026-02-26 18:29:53.612290512 +0000 UTC m=+4508.174044851" Feb 26 18:29:54 crc kubenswrapper[4805]: I0226 18:29:54.526342 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:54 crc kubenswrapper[4805]: I0226 18:29:54.526395 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:29:54 crc kubenswrapper[4805]: I0226 18:29:54.721911 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:54 crc 
kubenswrapper[4805]: I0226 18:29:54.722372 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:29:56 crc kubenswrapper[4805]: I0226 18:29:56.117325 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2lm4h" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="registry-server" probeResult="failure" output=< Feb 26 18:29:56 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:29:56 crc kubenswrapper[4805]: > Feb 26 18:29:56 crc kubenswrapper[4805]: I0226 18:29:56.151255 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wpmw9" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="registry-server" probeResult="failure" output=< Feb 26 18:29:56 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:29:56 crc kubenswrapper[4805]: > Feb 26 18:29:58 crc kubenswrapper[4805]: I0226 18:29:58.327675 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:58 crc kubenswrapper[4805]: I0226 18:29:58.328043 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:29:59 crc kubenswrapper[4805]: I0226 18:29:59.378748 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-68kjg" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="registry-server" probeResult="failure" output=< Feb 26 18:29:59 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:29:59 crc kubenswrapper[4805]: > Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.238558 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535510-lt547"] Feb 26 18:30:00 crc 
kubenswrapper[4805]: I0226 18:30:00.240623 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.242542 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.243949 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.244916 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.259536 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf"] Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.261334 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.263178 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.263205 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.276152 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf"] Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.294115 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535510-lt547"] Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.419964 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgmf\" (UniqueName: \"kubernetes.io/projected/78fd51e0-ba17-4be2-8e62-d1be273a222e-kube-api-access-nvgmf\") pod \"auto-csr-approver-29535510-lt547\" (UID: \"78fd51e0-ba17-4be2-8e62-d1be273a222e\") " pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.420371 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-config-volume\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.420395 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-secret-volume\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.420497 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvm2t\" (UniqueName: \"kubernetes.io/projected/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-kube-api-access-cvm2t\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.522643 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvgmf\" (UniqueName: \"kubernetes.io/projected/78fd51e0-ba17-4be2-8e62-d1be273a222e-kube-api-access-nvgmf\") pod \"auto-csr-approver-29535510-lt547\" (UID: \"78fd51e0-ba17-4be2-8e62-d1be273a222e\") " pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.522716 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-config-volume\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.522746 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-secret-volume\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 
18:30:00.522842 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvm2t\" (UniqueName: \"kubernetes.io/projected/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-kube-api-access-cvm2t\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.524380 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-config-volume\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.533282 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-secret-volume\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.540606 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvm2t\" (UniqueName: \"kubernetes.io/projected/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-kube-api-access-cvm2t\") pod \"collect-profiles-29535510-nx7tf\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.541240 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvgmf\" (UniqueName: \"kubernetes.io/projected/78fd51e0-ba17-4be2-8e62-d1be273a222e-kube-api-access-nvgmf\") pod \"auto-csr-approver-29535510-lt547\" (UID: \"78fd51e0-ba17-4be2-8e62-d1be273a222e\") " 
pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.570965 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:00 crc kubenswrapper[4805]: I0226 18:30:00.582763 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:01 crc kubenswrapper[4805]: I0226 18:30:01.104090 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535510-lt547"] Feb 26 18:30:01 crc kubenswrapper[4805]: I0226 18:30:01.118813 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf"] Feb 26 18:30:01 crc kubenswrapper[4805]: I0226 18:30:01.652116 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535510-lt547" event={"ID":"78fd51e0-ba17-4be2-8e62-d1be273a222e","Type":"ContainerStarted","Data":"459a75f82a24a3aad289203de6e83428b5672ddb3d9e4da16078f1a15d2702d0"} Feb 26 18:30:01 crc kubenswrapper[4805]: I0226 18:30:01.655243 4805 generic.go:334] "Generic (PLEG): container finished" podID="e29539a5-e1a8-4774-bcfd-11194f5ef1f6" containerID="3b33d76c2f38a1c2ca5dfcbef185cd06b57d6f379a44575f6aa90f716fd12b55" exitCode=0 Feb 26 18:30:01 crc kubenswrapper[4805]: I0226 18:30:01.655289 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" event={"ID":"e29539a5-e1a8-4774-bcfd-11194f5ef1f6","Type":"ContainerDied","Data":"3b33d76c2f38a1c2ca5dfcbef185cd06b57d6f379a44575f6aa90f716fd12b55"} Feb 26 18:30:01 crc kubenswrapper[4805]: I0226 18:30:01.655314 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" 
event={"ID":"e29539a5-e1a8-4774-bcfd-11194f5ef1f6","Type":"ContainerStarted","Data":"6207f39d5c4a2afd1ad4c10e4a7cfb942d353cc3431774772a41273ab46d3551"} Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.228617 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.394702 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-secret-volume\") pod \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.394832 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvm2t\" (UniqueName: \"kubernetes.io/projected/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-kube-api-access-cvm2t\") pod \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.394967 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-config-volume\") pod \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\" (UID: \"e29539a5-e1a8-4774-bcfd-11194f5ef1f6\") " Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.396430 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-config-volume" (OuterVolumeSpecName: "config-volume") pod "e29539a5-e1a8-4774-bcfd-11194f5ef1f6" (UID: "e29539a5-e1a8-4774-bcfd-11194f5ef1f6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.410262 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-kube-api-access-cvm2t" (OuterVolumeSpecName: "kube-api-access-cvm2t") pod "e29539a5-e1a8-4774-bcfd-11194f5ef1f6" (UID: "e29539a5-e1a8-4774-bcfd-11194f5ef1f6"). InnerVolumeSpecName "kube-api-access-cvm2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.416266 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e29539a5-e1a8-4774-bcfd-11194f5ef1f6" (UID: "e29539a5-e1a8-4774-bcfd-11194f5ef1f6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.497788 4805 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.497820 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvm2t\" (UniqueName: \"kubernetes.io/projected/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-kube-api-access-cvm2t\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.497829 4805 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29539a5-e1a8-4774-bcfd-11194f5ef1f6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.674968 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" 
event={"ID":"e29539a5-e1a8-4774-bcfd-11194f5ef1f6","Type":"ContainerDied","Data":"6207f39d5c4a2afd1ad4c10e4a7cfb942d353cc3431774772a41273ab46d3551"} Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.675031 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6207f39d5c4a2afd1ad4c10e4a7cfb942d353cc3431774772a41273ab46d3551" Feb 26 18:30:03 crc kubenswrapper[4805]: I0226 18:30:03.675126 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29535510-nx7tf" Feb 26 18:30:04 crc kubenswrapper[4805]: I0226 18:30:04.309280 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb"] Feb 26 18:30:04 crc kubenswrapper[4805]: I0226 18:30:04.321573 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29535465-7j9hb"] Feb 26 18:30:04 crc kubenswrapper[4805]: I0226 18:30:04.779581 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:30:04 crc kubenswrapper[4805]: I0226 18:30:04.834163 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:30:04 crc kubenswrapper[4805]: I0226 18:30:04.965597 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2abd567d-6626-47ee-8469-0e70cff16a4d" path="/var/lib/kubelet/pods/2abd567d-6626-47ee-8469-0e70cff16a4d/volumes" Feb 26 18:30:05 crc kubenswrapper[4805]: I0226 18:30:05.019389 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lm4h"] Feb 26 18:30:05 crc kubenswrapper[4805]: I0226 18:30:05.694413 4805 generic.go:334] "Generic (PLEG): container finished" podID="78fd51e0-ba17-4be2-8e62-d1be273a222e" 
containerID="f4a8e881a17b95a2c7a6190e7b8649d99646b4b4cce6c07197bb7491211cf453" exitCode=0 Feb 26 18:30:05 crc kubenswrapper[4805]: I0226 18:30:05.694459 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535510-lt547" event={"ID":"78fd51e0-ba17-4be2-8e62-d1be273a222e","Type":"ContainerDied","Data":"f4a8e881a17b95a2c7a6190e7b8649d99646b4b4cce6c07197bb7491211cf453"} Feb 26 18:30:05 crc kubenswrapper[4805]: I0226 18:30:05.715796 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wpmw9" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="registry-server" probeResult="failure" output=< Feb 26 18:30:05 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:30:05 crc kubenswrapper[4805]: > Feb 26 18:30:06 crc kubenswrapper[4805]: I0226 18:30:06.703846 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2lm4h" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="registry-server" containerID="cri-o://e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54" gracePeriod=2 Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.299382 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.385797 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvgmf\" (UniqueName: \"kubernetes.io/projected/78fd51e0-ba17-4be2-8e62-d1be273a222e-kube-api-access-nvgmf\") pod \"78fd51e0-ba17-4be2-8e62-d1be273a222e\" (UID: \"78fd51e0-ba17-4be2-8e62-d1be273a222e\") " Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.392906 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fd51e0-ba17-4be2-8e62-d1be273a222e-kube-api-access-nvgmf" (OuterVolumeSpecName: "kube-api-access-nvgmf") pod "78fd51e0-ba17-4be2-8e62-d1be273a222e" (UID: "78fd51e0-ba17-4be2-8e62-d1be273a222e"). InnerVolumeSpecName "kube-api-access-nvgmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.471133 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.487254 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-catalog-content\") pod \"04570f88-acf5-40a8-815b-442b40e626f3\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.487316 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmfj4\" (UniqueName: \"kubernetes.io/projected/04570f88-acf5-40a8-815b-442b40e626f3-kube-api-access-pmfj4\") pod \"04570f88-acf5-40a8-815b-442b40e626f3\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.487420 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-utilities\") pod \"04570f88-acf5-40a8-815b-442b40e626f3\" (UID: \"04570f88-acf5-40a8-815b-442b40e626f3\") " Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.487829 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvgmf\" (UniqueName: \"kubernetes.io/projected/78fd51e0-ba17-4be2-8e62-d1be273a222e-kube-api-access-nvgmf\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.488770 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-utilities" (OuterVolumeSpecName: "utilities") pod "04570f88-acf5-40a8-815b-442b40e626f3" (UID: "04570f88-acf5-40a8-815b-442b40e626f3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.539817 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04570f88-acf5-40a8-815b-442b40e626f3-kube-api-access-pmfj4" (OuterVolumeSpecName: "kube-api-access-pmfj4") pod "04570f88-acf5-40a8-815b-442b40e626f3" (UID: "04570f88-acf5-40a8-815b-442b40e626f3"). InnerVolumeSpecName "kube-api-access-pmfj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.540276 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04570f88-acf5-40a8-815b-442b40e626f3" (UID: "04570f88-acf5-40a8-815b-442b40e626f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.589175 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.589221 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmfj4\" (UniqueName: \"kubernetes.io/projected/04570f88-acf5-40a8-815b-442b40e626f3-kube-api-access-pmfj4\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.589237 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04570f88-acf5-40a8-815b-442b40e626f3-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.714655 4805 generic.go:334] "Generic (PLEG): container finished" podID="630c211f-3dd5-4951-9476-249d0f6bc049" 
containerID="ee898cafad946825d4f5ed26253700d08f31f8e0c4d5e5ed869eb79c9a933357" exitCode=0 Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.714749 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"630c211f-3dd5-4951-9476-249d0f6bc049","Type":"ContainerDied","Data":"ee898cafad946825d4f5ed26253700d08f31f8e0c4d5e5ed869eb79c9a933357"} Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.718531 4805 generic.go:334] "Generic (PLEG): container finished" podID="04570f88-acf5-40a8-815b-442b40e626f3" containerID="e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54" exitCode=0 Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.718564 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerDied","Data":"e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54"} Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.718593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lm4h" event={"ID":"04570f88-acf5-40a8-815b-442b40e626f3","Type":"ContainerDied","Data":"7771ee7f91fe40109c26cc45b18b69170402d49c18981cd119ab00090bce3f61"} Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.718615 4805 scope.go:117] "RemoveContainer" containerID="e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.718547 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lm4h" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.720705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535510-lt547" event={"ID":"78fd51e0-ba17-4be2-8e62-d1be273a222e","Type":"ContainerDied","Data":"459a75f82a24a3aad289203de6e83428b5672ddb3d9e4da16078f1a15d2702d0"} Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.720727 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459a75f82a24a3aad289203de6e83428b5672ddb3d9e4da16078f1a15d2702d0" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.720772 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535510-lt547" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.747138 4805 scope.go:117] "RemoveContainer" containerID="90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.777194 4805 scope.go:117] "RemoveContainer" containerID="4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.800903 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lm4h"] Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.811645 4805 scope.go:117] "RemoveContainer" containerID="e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54" Feb 26 18:30:07 crc kubenswrapper[4805]: E0226 18:30:07.812264 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54\": container with ID starting with e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54 not found: ID does not exist" containerID="e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54" Feb 
26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.812300 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54"} err="failed to get container status \"e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54\": rpc error: code = NotFound desc = could not find container \"e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54\": container with ID starting with e61add93abc2f46cb6beea681d82575191ba54e01af5a40024ddabf28b6f1b54 not found: ID does not exist" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.812326 4805 scope.go:117] "RemoveContainer" containerID="90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7" Feb 26 18:30:07 crc kubenswrapper[4805]: E0226 18:30:07.812824 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7\": container with ID starting with 90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7 not found: ID does not exist" containerID="90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.812860 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7"} err="failed to get container status \"90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7\": rpc error: code = NotFound desc = could not find container \"90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7\": container with ID starting with 90e80933c9201eb287ea9c003e4f3573882fff2724877f49f92aa0323c22c0f7 not found: ID does not exist" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.812887 4805 scope.go:117] "RemoveContainer" 
containerID="4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca" Feb 26 18:30:07 crc kubenswrapper[4805]: E0226 18:30:07.813298 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca\": container with ID starting with 4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca not found: ID does not exist" containerID="4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.813327 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca"} err="failed to get container status \"4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca\": rpc error: code = NotFound desc = could not find container \"4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca\": container with ID starting with 4c9b8e167cf7d3898ab5cab82858d4e5abadf510ed5bf2d9c9d2f8ad493133ca not found: ID does not exist" Feb 26 18:30:07 crc kubenswrapper[4805]: I0226 18:30:07.817517 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2lm4h"] Feb 26 18:30:08 crc kubenswrapper[4805]: I0226 18:30:08.363903 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535504-xn8cm"] Feb 26 18:30:08 crc kubenswrapper[4805]: I0226 18:30:08.375547 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535504-xn8cm"] Feb 26 18:30:08 crc kubenswrapper[4805]: I0226 18:30:08.386684 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:30:08 crc kubenswrapper[4805]: I0226 18:30:08.456902 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:30:08 crc kubenswrapper[4805]: I0226 18:30:08.968524 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04570f88-acf5-40a8-815b-442b40e626f3" path="/var/lib/kubelet/pods/04570f88-acf5-40a8-815b-442b40e626f3/volumes" Feb 26 18:30:08 crc kubenswrapper[4805]: I0226 18:30:08.969217 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae44122-1933-4cac-9b95-21b7c0cbe89f" path="/var/lib/kubelet/pods/dae44122-1933-4cac-9b95-21b7c0cbe89f/volumes" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.416687 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.436890 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ssh-key\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.437098 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-config-data\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.437132 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config-secret\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.437656 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.437728 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.437939 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-config-data" (OuterVolumeSpecName: "config-data") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.437977 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kp6jg\" (UniqueName: \"kubernetes.io/projected/630c211f-3dd5-4951-9476-249d0f6bc049-kube-api-access-kp6jg\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.438196 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-workdir\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.438219 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ca-certs\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: 
\"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.438293 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-temporary\") pod \"630c211f-3dd5-4951-9476-249d0f6bc049\" (UID: \"630c211f-3dd5-4951-9476-249d0f6bc049\") " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.439053 4805 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-config-data\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.439441 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.443284 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.444228 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630c211f-3dd5-4951-9476-249d0f6bc049-kube-api-access-kp6jg" (OuterVolumeSpecName: "kube-api-access-kp6jg") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). 
InnerVolumeSpecName "kube-api-access-kp6jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.482438 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.490249 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.507715 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.515733 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542411 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542451 4805 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/630c211f-3dd5-4951-9476-249d0f6bc049-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542495 4805 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542510 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kp6jg\" (UniqueName: \"kubernetes.io/projected/630c211f-3dd5-4951-9476-249d0f6bc049-kube-api-access-kp6jg\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542522 4805 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542535 4805 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.542551 4805 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/630c211f-3dd5-4951-9476-249d0f6bc049-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: 
I0226 18:30:09.572548 4805 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.647006 4805 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.741439 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"630c211f-3dd5-4951-9476-249d0f6bc049","Type":"ContainerDied","Data":"204b756e2eee983200f5b6c59a5bb335be88cb87c4761c8089d8906af4ab2223"} Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.741476 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204b756e2eee983200f5b6c59a5bb335be88cb87c4761c8089d8906af4ab2223" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.741525 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.854387 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "630c211f-3dd5-4951-9476-249d0f6bc049" (UID: "630c211f-3dd5-4951-9476-249d0f6bc049"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:09 crc kubenswrapper[4805]: I0226 18:30:09.950527 4805 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/630c211f-3dd5-4951-9476-249d0f6bc049-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:10 crc kubenswrapper[4805]: I0226 18:30:10.429377 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-68kjg"] Feb 26 18:30:10 crc kubenswrapper[4805]: I0226 18:30:10.431129 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-68kjg" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="registry-server" containerID="cri-o://26898196b0c9235d9818b1d42ec24eec5a9d594ac0a456ecf3eeabf62615eb1f" gracePeriod=2 Feb 26 18:30:10 crc kubenswrapper[4805]: I0226 18:30:10.755092 4805 generic.go:334] "Generic (PLEG): container finished" podID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerID="26898196b0c9235d9818b1d42ec24eec5a9d594ac0a456ecf3eeabf62615eb1f" exitCode=0 Feb 26 18:30:10 crc kubenswrapper[4805]: I0226 18:30:10.755149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerDied","Data":"26898196b0c9235d9818b1d42ec24eec5a9d594ac0a456ecf3eeabf62615eb1f"} Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.604792 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.766810 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-68kjg" event={"ID":"feba8a05-3532-40f7-829e-35feaf8a3f4b","Type":"ContainerDied","Data":"9315d67f9eec0f41499d9608cfa49026e999723100c7a68706f8666f5f89010d"} Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.766876 4805 scope.go:117] "RemoveContainer" containerID="26898196b0c9235d9818b1d42ec24eec5a9d594ac0a456ecf3eeabf62615eb1f" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.766916 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-68kjg" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.781838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-utilities\") pod \"feba8a05-3532-40f7-829e-35feaf8a3f4b\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.782104 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j8hs\" (UniqueName: \"kubernetes.io/projected/feba8a05-3532-40f7-829e-35feaf8a3f4b-kube-api-access-2j8hs\") pod \"feba8a05-3532-40f7-829e-35feaf8a3f4b\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.782182 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-catalog-content\") pod \"feba8a05-3532-40f7-829e-35feaf8a3f4b\" (UID: \"feba8a05-3532-40f7-829e-35feaf8a3f4b\") " Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.783409 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-utilities" (OuterVolumeSpecName: "utilities") pod "feba8a05-3532-40f7-829e-35feaf8a3f4b" (UID: "feba8a05-3532-40f7-829e-35feaf8a3f4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.789139 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feba8a05-3532-40f7-829e-35feaf8a3f4b-kube-api-access-2j8hs" (OuterVolumeSpecName: "kube-api-access-2j8hs") pod "feba8a05-3532-40f7-829e-35feaf8a3f4b" (UID: "feba8a05-3532-40f7-829e-35feaf8a3f4b"). InnerVolumeSpecName "kube-api-access-2j8hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.810055 4805 scope.go:117] "RemoveContainer" containerID="6c772f2af5fcf962075380b190c9da85da7802807aa4aef28e04908ffc56437b" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.813413 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feba8a05-3532-40f7-829e-35feaf8a3f4b" (UID: "feba8a05-3532-40f7-829e-35feaf8a3f4b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.855206 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.857993 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="extract-content" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.858045 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="extract-content" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.858064 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630c211f-3dd5-4951-9476-249d0f6bc049" containerName="tempest-tests-tempest-tests-runner" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.858074 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="630c211f-3dd5-4951-9476-249d0f6bc049" containerName="tempest-tests-tempest-tests-runner" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.858083 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29539a5-e1a8-4774-bcfd-11194f5ef1f6" containerName="collect-profiles" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.858104 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29539a5-e1a8-4774-bcfd-11194f5ef1f6" containerName="collect-profiles" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.858125 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="registry-server" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.858135 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="registry-server" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.871492 4805 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="extract-utilities" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.871541 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="extract-utilities" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.871616 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="extract-utilities" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.871626 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="extract-utilities" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.871649 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="registry-server" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.871658 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="registry-server" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.871689 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="extract-content" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.871698 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="extract-content" Feb 26 18:30:11 crc kubenswrapper[4805]: E0226 18:30:11.871716 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fd51e0-ba17-4be2-8e62-d1be273a222e" containerName="oc" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.871724 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fd51e0-ba17-4be2-8e62-d1be273a222e" containerName="oc" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.872237 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="04570f88-acf5-40a8-815b-442b40e626f3" containerName="registry-server" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.872273 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" containerName="registry-server" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.872295 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fd51e0-ba17-4be2-8e62-d1be273a222e" containerName="oc" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.872305 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29539a5-e1a8-4774-bcfd-11194f5ef1f6" containerName="collect-profiles" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.872322 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="630c211f-3dd5-4951-9476-249d0f6bc049" containerName="tempest-tests-tempest-tests-runner" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.876849 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.876964 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.879427 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-qcl6q" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.884707 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.885731 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j8hs\" (UniqueName: \"kubernetes.io/projected/feba8a05-3532-40f7-829e-35feaf8a3f4b-kube-api-access-2j8hs\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.885806 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feba8a05-3532-40f7-829e-35feaf8a3f4b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.912732 4805 scope.go:117] "RemoveContainer" containerID="b6b77f6d4347a3cb23d9bc33f1944ffa0e72eddb99f977fc71a9267540851c07" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.988463 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2hgk\" (UniqueName: \"kubernetes.io/projected/c2a06e59-5209-4469-b044-b644558241b8-kube-api-access-p2hgk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:11 crc kubenswrapper[4805]: I0226 18:30:11.988605 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.091827 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2hgk\" (UniqueName: \"kubernetes.io/projected/c2a06e59-5209-4469-b044-b644558241b8-kube-api-access-p2hgk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.091911 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.092505 4805 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.097677 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-68kjg"] Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.107509 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-68kjg"] Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.113863 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2hgk\" 
(UniqueName: \"kubernetes.io/projected/c2a06e59-5209-4469-b044-b644558241b8-kube-api-access-p2hgk\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.120621 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c2a06e59-5209-4469-b044-b644558241b8\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.283410 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.746178 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 26 18:30:12 crc kubenswrapper[4805]: W0226 18:30:12.760073 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a06e59_5209_4469_b044_b644558241b8.slice/crio-2dfff16538ccd96c943ef4cf1579072ff328af425d621c77b73b723aeb4d2194 WatchSource:0}: Error finding container 2dfff16538ccd96c943ef4cf1579072ff328af425d621c77b73b723aeb4d2194: Status 404 returned error can't find the container with id 2dfff16538ccd96c943ef4cf1579072ff328af425d621c77b73b723aeb4d2194 Feb 26 18:30:12 crc kubenswrapper[4805]: I0226 18:30:12.777924 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c2a06e59-5209-4469-b044-b644558241b8","Type":"ContainerStarted","Data":"2dfff16538ccd96c943ef4cf1579072ff328af425d621c77b73b723aeb4d2194"} Feb 26 18:30:12 crc 
kubenswrapper[4805]: I0226 18:30:12.969209 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feba8a05-3532-40f7-829e-35feaf8a3f4b" path="/var/lib/kubelet/pods/feba8a05-3532-40f7-829e-35feaf8a3f4b/volumes" Feb 26 18:30:14 crc kubenswrapper[4805]: I0226 18:30:14.573519 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:30:14 crc kubenswrapper[4805]: I0226 18:30:14.622804 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:30:14 crc kubenswrapper[4805]: I0226 18:30:14.804901 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c2a06e59-5209-4469-b044-b644558241b8","Type":"ContainerStarted","Data":"39d9512edc61b21b4550b35c59e9fb3a24f5904d5b88e21c7e0501168550763b"} Feb 26 18:30:14 crc kubenswrapper[4805]: I0226 18:30:14.854431 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=3.036963699 podStartE2EDuration="3.854413289s" podCreationTimestamp="2026-02-26 18:30:11 +0000 UTC" firstStartedPulling="2026-02-26 18:30:12.762817804 +0000 UTC m=+4527.324572143" lastFinishedPulling="2026-02-26 18:30:13.580267384 +0000 UTC m=+4528.142021733" observedRunningTime="2026-02-26 18:30:14.824267639 +0000 UTC m=+4529.386021988" watchObservedRunningTime="2026-02-26 18:30:14.854413289 +0000 UTC m=+4529.416167628" Feb 26 18:30:15 crc kubenswrapper[4805]: I0226 18:30:15.224103 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpmw9"] Feb 26 18:30:15 crc kubenswrapper[4805]: I0226 18:30:15.818614 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wpmw9" 
podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="registry-server" containerID="cri-o://927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb" gracePeriod=2 Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.456724 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.592679 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-utilities\") pod \"4833f528-f466-4ea0-bb4a-37053e4941bb\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.592838 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrnhn\" (UniqueName: \"kubernetes.io/projected/4833f528-f466-4ea0-bb4a-37053e4941bb-kube-api-access-jrnhn\") pod \"4833f528-f466-4ea0-bb4a-37053e4941bb\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.592947 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-catalog-content\") pod \"4833f528-f466-4ea0-bb4a-37053e4941bb\" (UID: \"4833f528-f466-4ea0-bb4a-37053e4941bb\") " Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.593689 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-utilities" (OuterVolumeSpecName: "utilities") pod "4833f528-f466-4ea0-bb4a-37053e4941bb" (UID: "4833f528-f466-4ea0-bb4a-37053e4941bb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.599806 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4833f528-f466-4ea0-bb4a-37053e4941bb-kube-api-access-jrnhn" (OuterVolumeSpecName: "kube-api-access-jrnhn") pod "4833f528-f466-4ea0-bb4a-37053e4941bb" (UID: "4833f528-f466-4ea0-bb4a-37053e4941bb"). InnerVolumeSpecName "kube-api-access-jrnhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.695894 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.696145 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrnhn\" (UniqueName: \"kubernetes.io/projected/4833f528-f466-4ea0-bb4a-37053e4941bb-kube-api-access-jrnhn\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.803400 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4833f528-f466-4ea0-bb4a-37053e4941bb" (UID: "4833f528-f466-4ea0-bb4a-37053e4941bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.830335 4805 generic.go:334] "Generic (PLEG): container finished" podID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerID="927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb" exitCode=0 Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.830413 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpmw9" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.830418 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerDied","Data":"927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb"} Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.830768 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpmw9" event={"ID":"4833f528-f466-4ea0-bb4a-37053e4941bb","Type":"ContainerDied","Data":"0abe18811c0e74e56f556cd69709ba4a41e73603fdb4d0d3860b0df4d97d2e26"} Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.830785 4805 scope.go:117] "RemoveContainer" containerID="927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.859249 4805 scope.go:117] "RemoveContainer" containerID="f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.870914 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpmw9"] Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.880210 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wpmw9"] Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.899951 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4833f528-f466-4ea0-bb4a-37053e4941bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:30:16 crc kubenswrapper[4805]: I0226 18:30:16.970122 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" path="/var/lib/kubelet/pods/4833f528-f466-4ea0-bb4a-37053e4941bb/volumes" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 
18:30:17.699754 4805 scope.go:117] "RemoveContainer" containerID="7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 18:30:17.762248 4805 scope.go:117] "RemoveContainer" containerID="927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb" Feb 26 18:30:17 crc kubenswrapper[4805]: E0226 18:30:17.762667 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb\": container with ID starting with 927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb not found: ID does not exist" containerID="927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 18:30:17.762712 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb"} err="failed to get container status \"927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb\": rpc error: code = NotFound desc = could not find container \"927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb\": container with ID starting with 927fe61511ee1dcae275b8b793ee161bede096c9a77cb6a8b9594894f5cf06fb not found: ID does not exist" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 18:30:17.762743 4805 scope.go:117] "RemoveContainer" containerID="f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86" Feb 26 18:30:17 crc kubenswrapper[4805]: E0226 18:30:17.763007 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86\": container with ID starting with f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86 not found: ID does not exist" 
containerID="f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 18:30:17.763054 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86"} err="failed to get container status \"f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86\": rpc error: code = NotFound desc = could not find container \"f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86\": container with ID starting with f8cbb671ec19d7441d14dc77e3fbbda74d4029078a7e69a95c1b7e7cb75e2d86 not found: ID does not exist" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 18:30:17.763074 4805 scope.go:117] "RemoveContainer" containerID="7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48" Feb 26 18:30:17 crc kubenswrapper[4805]: E0226 18:30:17.763273 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48\": container with ID starting with 7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48 not found: ID does not exist" containerID="7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48" Feb 26 18:30:17 crc kubenswrapper[4805]: I0226 18:30:17.763298 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48"} err="failed to get container status \"7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48\": rpc error: code = NotFound desc = could not find container \"7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48\": container with ID starting with 7178b5650a5fb07e2b17ea0049133e636454cdb97eec883711dbd6e5fac4dd48 not found: ID does not exist" Feb 26 18:30:49 crc kubenswrapper[4805]: I0226 18:30:49.042367 4805 scope.go:117] 
"RemoveContainer" containerID="c5d675fd4f23b5836b516950fd0440ed74fd2b004d03765d871bf6b708eebd27" Feb 26 18:30:49 crc kubenswrapper[4805]: I0226 18:30:49.083786 4805 scope.go:117] "RemoveContainer" containerID="b6568c0cd7a6a44894a07372f9f19a245c1d97ba938ac263ae4eedc5b51ae9da" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.561172 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7cvl2/must-gather-m4mgg"] Feb 26 18:30:56 crc kubenswrapper[4805]: E0226 18:30:56.562213 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="extract-content" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.562227 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="extract-content" Feb 26 18:30:56 crc kubenswrapper[4805]: E0226 18:30:56.562249 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="registry-server" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.562254 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="registry-server" Feb 26 18:30:56 crc kubenswrapper[4805]: E0226 18:30:56.562294 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="extract-utilities" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.562300 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="extract-utilities" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.562539 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="4833f528-f466-4ea0-bb4a-37053e4941bb" containerName="registry-server" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.563773 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.572006 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7cvl2"/"default-dockercfg-84bxt" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.572258 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7cvl2"/"kube-root-ca.crt" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.572389 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7cvl2"/"openshift-service-ca.crt" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.579675 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7cvl2/must-gather-m4mgg"] Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.635398 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzt6\" (UniqueName: \"kubernetes.io/projected/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-kube-api-access-7kzt6\") pod \"must-gather-m4mgg\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.635579 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-must-gather-output\") pod \"must-gather-m4mgg\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.738189 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-must-gather-output\") pod \"must-gather-m4mgg\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " 
pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.738357 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzt6\" (UniqueName: \"kubernetes.io/projected/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-kube-api-access-7kzt6\") pod \"must-gather-m4mgg\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.738696 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-must-gather-output\") pod \"must-gather-m4mgg\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.766758 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzt6\" (UniqueName: \"kubernetes.io/projected/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-kube-api-access-7kzt6\") pod \"must-gather-m4mgg\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:56 crc kubenswrapper[4805]: I0226 18:30:56.884060 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:30:57 crc kubenswrapper[4805]: W0226 18:30:57.436523 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd363f2b6_6c6f_4c6a_acb9_f1fd00a306a4.slice/crio-1ee3dc40b39d8b0d1defc3febba1ccc01d0b6e33815aaa19500844d945530df9 WatchSource:0}: Error finding container 1ee3dc40b39d8b0d1defc3febba1ccc01d0b6e33815aaa19500844d945530df9: Status 404 returned error can't find the container with id 1ee3dc40b39d8b0d1defc3febba1ccc01d0b6e33815aaa19500844d945530df9 Feb 26 18:30:57 crc kubenswrapper[4805]: I0226 18:30:57.437842 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7cvl2/must-gather-m4mgg"] Feb 26 18:30:58 crc kubenswrapper[4805]: I0226 18:30:58.387387 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" event={"ID":"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4","Type":"ContainerStarted","Data":"1ee3dc40b39d8b0d1defc3febba1ccc01d0b6e33815aaa19500844d945530df9"} Feb 26 18:31:05 crc kubenswrapper[4805]: I0226 18:31:05.486639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" event={"ID":"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4","Type":"ContainerStarted","Data":"08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817"} Feb 26 18:31:06 crc kubenswrapper[4805]: I0226 18:31:06.498258 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" event={"ID":"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4","Type":"ContainerStarted","Data":"fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b"} Feb 26 18:31:06 crc kubenswrapper[4805]: I0226 18:31:06.518847 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" podStartSLOduration=2.874486503 
podStartE2EDuration="10.518824504s" podCreationTimestamp="2026-02-26 18:30:56 +0000 UTC" firstStartedPulling="2026-02-26 18:30:57.442077644 +0000 UTC m=+4572.003831983" lastFinishedPulling="2026-02-26 18:31:05.086415645 +0000 UTC m=+4579.648169984" observedRunningTime="2026-02-26 18:31:06.511211192 +0000 UTC m=+4581.072965531" watchObservedRunningTime="2026-02-26 18:31:06.518824504 +0000 UTC m=+4581.080578843" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.411204 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-8p22r"] Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.413506 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.527315 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkcl5\" (UniqueName: \"kubernetes.io/projected/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-kube-api-access-tkcl5\") pod \"crc-debug-8p22r\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.527515 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-host\") pod \"crc-debug-8p22r\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.629800 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-host\") pod \"crc-debug-8p22r\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.629967 4805 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-host\") pod \"crc-debug-8p22r\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.629983 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkcl5\" (UniqueName: \"kubernetes.io/projected/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-kube-api-access-tkcl5\") pod \"crc-debug-8p22r\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.650837 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkcl5\" (UniqueName: \"kubernetes.io/projected/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-kube-api-access-tkcl5\") pod \"crc-debug-8p22r\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:09 crc kubenswrapper[4805]: I0226 18:31:09.733189 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:31:10 crc kubenswrapper[4805]: I0226 18:31:10.542279 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" event={"ID":"a9eb3b15-aee4-4a41-9003-88ee94ccf12d","Type":"ContainerStarted","Data":"c8aba7534b5f11c9b1df0a11f47ae4639ed708c31fed04570075132d9549076e"} Feb 26 18:31:24 crc kubenswrapper[4805]: I0226 18:31:24.687629 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" event={"ID":"a9eb3b15-aee4-4a41-9003-88ee94ccf12d","Type":"ContainerStarted","Data":"6e9fecc8a373882648e9b316833ccd4fc6599bc5c954578c0fb6e6622bcc96cd"} Feb 26 18:31:24 crc kubenswrapper[4805]: I0226 18:31:24.724980 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" podStartSLOduration=2.102565013 podStartE2EDuration="15.724958663s" podCreationTimestamp="2026-02-26 18:31:09 +0000 UTC" firstStartedPulling="2026-02-26 18:31:09.780265113 +0000 UTC m=+4584.342019452" lastFinishedPulling="2026-02-26 18:31:23.402658763 +0000 UTC m=+4597.964413102" observedRunningTime="2026-02-26 18:31:24.715426413 +0000 UTC m=+4599.277180752" watchObservedRunningTime="2026-02-26 18:31:24.724958663 +0000 UTC m=+4599.286713012" Feb 26 18:31:32 crc kubenswrapper[4805]: I0226 18:31:32.978524 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:31:32 crc kubenswrapper[4805]: I0226 18:31:32.979170 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.138109 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535512-lkw4c"] Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.140576 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.143358 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.143748 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.146224 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.147828 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535512-lkw4c"] Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.219217 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjhx\" (UniqueName: \"kubernetes.io/projected/5c38d130-9e7f-4edf-bb86-f62f1bfe000b-kube-api-access-xdjhx\") pod \"auto-csr-approver-29535512-lkw4c\" (UID: \"5c38d130-9e7f-4edf-bb86-f62f1bfe000b\") " pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.321112 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjhx\" (UniqueName: \"kubernetes.io/projected/5c38d130-9e7f-4edf-bb86-f62f1bfe000b-kube-api-access-xdjhx\") pod \"auto-csr-approver-29535512-lkw4c\" (UID: \"5c38d130-9e7f-4edf-bb86-f62f1bfe000b\") " 
pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.355063 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjhx\" (UniqueName: \"kubernetes.io/projected/5c38d130-9e7f-4edf-bb86-f62f1bfe000b-kube-api-access-xdjhx\") pod \"auto-csr-approver-29535512-lkw4c\" (UID: \"5c38d130-9e7f-4edf-bb86-f62f1bfe000b\") " pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:00 crc kubenswrapper[4805]: I0226 18:32:00.458805 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:01 crc kubenswrapper[4805]: I0226 18:32:01.723968 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535512-lkw4c"] Feb 26 18:32:02 crc kubenswrapper[4805]: I0226 18:32:02.318057 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" event={"ID":"5c38d130-9e7f-4edf-bb86-f62f1bfe000b","Type":"ContainerStarted","Data":"e1adc922359d3fdeaa846b36dfa051b932c42dea4fd6f11b14dbc3d0507d8b3f"} Feb 26 18:32:02 crc kubenswrapper[4805]: I0226 18:32:02.977596 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:32:02 crc kubenswrapper[4805]: I0226 18:32:02.978504 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:32:03 crc kubenswrapper[4805]: I0226 18:32:03.329173 4805 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" event={"ID":"5c38d130-9e7f-4edf-bb86-f62f1bfe000b","Type":"ContainerStarted","Data":"16ab2789883d381c1d32c9c2adc5ad689ae87e46691707a72370d718f0efa8ad"} Feb 26 18:32:03 crc kubenswrapper[4805]: I0226 18:32:03.347527 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" podStartSLOduration=2.4757170889999998 podStartE2EDuration="3.347512131s" podCreationTimestamp="2026-02-26 18:32:00 +0000 UTC" firstStartedPulling="2026-02-26 18:32:01.727601829 +0000 UTC m=+4636.289356178" lastFinishedPulling="2026-02-26 18:32:02.599396871 +0000 UTC m=+4637.161151220" observedRunningTime="2026-02-26 18:32:03.342972147 +0000 UTC m=+4637.904726486" watchObservedRunningTime="2026-02-26 18:32:03.347512131 +0000 UTC m=+4637.909266470" Feb 26 18:32:04 crc kubenswrapper[4805]: I0226 18:32:04.349299 4805 generic.go:334] "Generic (PLEG): container finished" podID="5c38d130-9e7f-4edf-bb86-f62f1bfe000b" containerID="16ab2789883d381c1d32c9c2adc5ad689ae87e46691707a72370d718f0efa8ad" exitCode=0 Feb 26 18:32:04 crc kubenswrapper[4805]: I0226 18:32:04.349372 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" event={"ID":"5c38d130-9e7f-4edf-bb86-f62f1bfe000b","Type":"ContainerDied","Data":"16ab2789883d381c1d32c9c2adc5ad689ae87e46691707a72370d718f0efa8ad"} Feb 26 18:32:05 crc kubenswrapper[4805]: I0226 18:32:05.928549 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.061955 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjhx\" (UniqueName: \"kubernetes.io/projected/5c38d130-9e7f-4edf-bb86-f62f1bfe000b-kube-api-access-xdjhx\") pod \"5c38d130-9e7f-4edf-bb86-f62f1bfe000b\" (UID: \"5c38d130-9e7f-4edf-bb86-f62f1bfe000b\") " Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.069337 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c38d130-9e7f-4edf-bb86-f62f1bfe000b-kube-api-access-xdjhx" (OuterVolumeSpecName: "kube-api-access-xdjhx") pod "5c38d130-9e7f-4edf-bb86-f62f1bfe000b" (UID: "5c38d130-9e7f-4edf-bb86-f62f1bfe000b"). InnerVolumeSpecName "kube-api-access-xdjhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.164215 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjhx\" (UniqueName: \"kubernetes.io/projected/5c38d130-9e7f-4edf-bb86-f62f1bfe000b-kube-api-access-xdjhx\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.377103 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" event={"ID":"5c38d130-9e7f-4edf-bb86-f62f1bfe000b","Type":"ContainerDied","Data":"e1adc922359d3fdeaa846b36dfa051b932c42dea4fd6f11b14dbc3d0507d8b3f"} Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.377154 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1adc922359d3fdeaa846b36dfa051b932c42dea4fd6f11b14dbc3d0507d8b3f" Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.377218 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535512-lkw4c" Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.428883 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535506-x5d75"] Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.440536 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535506-x5d75"] Feb 26 18:32:06 crc kubenswrapper[4805]: I0226 18:32:06.968989 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1324f8bb-6699-494d-af28-301c250492e7" path="/var/lib/kubelet/pods/1324f8bb-6699-494d-af28-301c250492e7/volumes" Feb 26 18:32:21 crc kubenswrapper[4805]: I0226 18:32:21.536474 4805 generic.go:334] "Generic (PLEG): container finished" podID="a9eb3b15-aee4-4a41-9003-88ee94ccf12d" containerID="6e9fecc8a373882648e9b316833ccd4fc6599bc5c954578c0fb6e6622bcc96cd" exitCode=0 Feb 26 18:32:21 crc kubenswrapper[4805]: I0226 18:32:21.536573 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" event={"ID":"a9eb3b15-aee4-4a41-9003-88ee94ccf12d","Type":"ContainerDied","Data":"6e9fecc8a373882648e9b316833ccd4fc6599bc5c954578c0fb6e6622bcc96cd"} Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.006328 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.051723 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-8p22r"] Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.060656 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-8p22r"] Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.062263 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkcl5\" (UniqueName: \"kubernetes.io/projected/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-kube-api-access-tkcl5\") pod \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.062321 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-host\") pod \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\" (UID: \"a9eb3b15-aee4-4a41-9003-88ee94ccf12d\") " Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.064736 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-host" (OuterVolumeSpecName: "host") pod "a9eb3b15-aee4-4a41-9003-88ee94ccf12d" (UID: "a9eb3b15-aee4-4a41-9003-88ee94ccf12d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.072399 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-kube-api-access-tkcl5" (OuterVolumeSpecName: "kube-api-access-tkcl5") pod "a9eb3b15-aee4-4a41-9003-88ee94ccf12d" (UID: "a9eb3b15-aee4-4a41-9003-88ee94ccf12d"). InnerVolumeSpecName "kube-api-access-tkcl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.164185 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkcl5\" (UniqueName: \"kubernetes.io/projected/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-kube-api-access-tkcl5\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.164211 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a9eb3b15-aee4-4a41-9003-88ee94ccf12d-host\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.556956 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8aba7534b5f11c9b1df0a11f47ae4639ed708c31fed04570075132d9549076e" Feb 26 18:32:23 crc kubenswrapper[4805]: I0226 18:32:23.557008 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-8p22r" Feb 26 18:32:23 crc kubenswrapper[4805]: E0226 18:32:23.665760 4805 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9eb3b15_aee4_4a41_9003_88ee94ccf12d.slice/crio-c8aba7534b5f11c9b1df0a11f47ae4639ed708c31fed04570075132d9549076e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9eb3b15_aee4_4a41_9003_88ee94ccf12d.slice\": RecentStats: unable to find data in memory cache]" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.294427 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-nm456"] Feb 26 18:32:24 crc kubenswrapper[4805]: E0226 18:32:24.295109 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9eb3b15-aee4-4a41-9003-88ee94ccf12d" containerName="container-00" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 
18:32:24.295123 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9eb3b15-aee4-4a41-9003-88ee94ccf12d" containerName="container-00" Feb 26 18:32:24 crc kubenswrapper[4805]: E0226 18:32:24.295149 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38d130-9e7f-4edf-bb86-f62f1bfe000b" containerName="oc" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.295155 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38d130-9e7f-4edf-bb86-f62f1bfe000b" containerName="oc" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.295352 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c38d130-9e7f-4edf-bb86-f62f1bfe000b" containerName="oc" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.295366 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9eb3b15-aee4-4a41-9003-88ee94ccf12d" containerName="container-00" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.296063 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.387809 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngd6\" (UniqueName: \"kubernetes.io/projected/1ebc53b3-5f95-43f4-8db9-b42133d242a2-kube-api-access-kngd6\") pod \"crc-debug-nm456\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.387985 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ebc53b3-5f95-43f4-8db9-b42133d242a2-host\") pod \"crc-debug-nm456\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.489711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kngd6\" (UniqueName: \"kubernetes.io/projected/1ebc53b3-5f95-43f4-8db9-b42133d242a2-kube-api-access-kngd6\") pod \"crc-debug-nm456\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.489774 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ebc53b3-5f95-43f4-8db9-b42133d242a2-host\") pod \"crc-debug-nm456\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.489934 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ebc53b3-5f95-43f4-8db9-b42133d242a2-host\") pod \"crc-debug-nm456\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc 
kubenswrapper[4805]: I0226 18:32:24.578993 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kngd6\" (UniqueName: \"kubernetes.io/projected/1ebc53b3-5f95-43f4-8db9-b42133d242a2-kube-api-access-kngd6\") pod \"crc-debug-nm456\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.614739 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:24 crc kubenswrapper[4805]: I0226 18:32:24.967801 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9eb3b15-aee4-4a41-9003-88ee94ccf12d" path="/var/lib/kubelet/pods/a9eb3b15-aee4-4a41-9003-88ee94ccf12d/volumes" Feb 26 18:32:25 crc kubenswrapper[4805]: I0226 18:32:25.575379 4805 generic.go:334] "Generic (PLEG): container finished" podID="1ebc53b3-5f95-43f4-8db9-b42133d242a2" containerID="c3f94d9a7ee84d714afd682e47cc63bf12aa8e9572def3b9a444a3da4bcb3619" exitCode=0 Feb 26 18:32:25 crc kubenswrapper[4805]: I0226 18:32:25.575486 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-nm456" event={"ID":"1ebc53b3-5f95-43f4-8db9-b42133d242a2","Type":"ContainerDied","Data":"c3f94d9a7ee84d714afd682e47cc63bf12aa8e9572def3b9a444a3da4bcb3619"} Feb 26 18:32:25 crc kubenswrapper[4805]: I0226 18:32:25.575705 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-nm456" event={"ID":"1ebc53b3-5f95-43f4-8db9-b42133d242a2","Type":"ContainerStarted","Data":"41ee67c16f2c91a72e02dc9b8193a780c8d9c8249aa95f86db1d8b63b388e9ad"} Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.728823 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.833272 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kngd6\" (UniqueName: \"kubernetes.io/projected/1ebc53b3-5f95-43f4-8db9-b42133d242a2-kube-api-access-kngd6\") pod \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.833393 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ebc53b3-5f95-43f4-8db9-b42133d242a2-host\") pod \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\" (UID: \"1ebc53b3-5f95-43f4-8db9-b42133d242a2\") " Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.833551 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ebc53b3-5f95-43f4-8db9-b42133d242a2-host" (OuterVolumeSpecName: "host") pod "1ebc53b3-5f95-43f4-8db9-b42133d242a2" (UID: "1ebc53b3-5f95-43f4-8db9-b42133d242a2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.833820 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1ebc53b3-5f95-43f4-8db9-b42133d242a2-host\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.838486 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebc53b3-5f95-43f4-8db9-b42133d242a2-kube-api-access-kngd6" (OuterVolumeSpecName: "kube-api-access-kngd6") pod "1ebc53b3-5f95-43f4-8db9-b42133d242a2" (UID: "1ebc53b3-5f95-43f4-8db9-b42133d242a2"). InnerVolumeSpecName "kube-api-access-kngd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:32:26 crc kubenswrapper[4805]: I0226 18:32:26.935540 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kngd6\" (UniqueName: \"kubernetes.io/projected/1ebc53b3-5f95-43f4-8db9-b42133d242a2-kube-api-access-kngd6\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:27 crc kubenswrapper[4805]: I0226 18:32:27.078094 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-nm456"] Feb 26 18:32:27 crc kubenswrapper[4805]: I0226 18:32:27.088437 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-nm456"] Feb 26 18:32:27 crc kubenswrapper[4805]: I0226 18:32:27.601523 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ee67c16f2c91a72e02dc9b8193a780c8d9c8249aa95f86db1d8b63b388e9ad" Feb 26 18:32:27 crc kubenswrapper[4805]: I0226 18:32:27.601592 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-nm456" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.259585 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-ktvb6"] Feb 26 18:32:28 crc kubenswrapper[4805]: E0226 18:32:28.260191 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebc53b3-5f95-43f4-8db9-b42133d242a2" containerName="container-00" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.260210 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebc53b3-5f95-43f4-8db9-b42133d242a2" containerName="container-00" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.260526 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ebc53b3-5f95-43f4-8db9-b42133d242a2" containerName="container-00" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.261620 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.366909 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8m79\" (UniqueName: \"kubernetes.io/projected/2846a75f-e617-41be-8892-18cbfafee47b-kube-api-access-n8m79\") pod \"crc-debug-ktvb6\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.367290 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2846a75f-e617-41be-8892-18cbfafee47b-host\") pod \"crc-debug-ktvb6\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.469778 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2846a75f-e617-41be-8892-18cbfafee47b-host\") pod \"crc-debug-ktvb6\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.469976 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2846a75f-e617-41be-8892-18cbfafee47b-host\") pod \"crc-debug-ktvb6\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.470183 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8m79\" (UniqueName: \"kubernetes.io/projected/2846a75f-e617-41be-8892-18cbfafee47b-kube-api-access-n8m79\") pod \"crc-debug-ktvb6\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc 
kubenswrapper[4805]: I0226 18:32:28.495663 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8m79\" (UniqueName: \"kubernetes.io/projected/2846a75f-e617-41be-8892-18cbfafee47b-kube-api-access-n8m79\") pod \"crc-debug-ktvb6\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.586409 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:28 crc kubenswrapper[4805]: I0226 18:32:28.965680 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ebc53b3-5f95-43f4-8db9-b42133d242a2" path="/var/lib/kubelet/pods/1ebc53b3-5f95-43f4-8db9-b42133d242a2/volumes" Feb 26 18:32:29 crc kubenswrapper[4805]: I0226 18:32:29.623185 4805 generic.go:334] "Generic (PLEG): container finished" podID="2846a75f-e617-41be-8892-18cbfafee47b" containerID="049dc2e70a143fce6d6a756eefa2777a5c3c6c40013add6dcbf88e8c3eeb2ea2" exitCode=0 Feb 26 18:32:29 crc kubenswrapper[4805]: I0226 18:32:29.623244 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" event={"ID":"2846a75f-e617-41be-8892-18cbfafee47b","Type":"ContainerDied","Data":"049dc2e70a143fce6d6a756eefa2777a5c3c6c40013add6dcbf88e8c3eeb2ea2"} Feb 26 18:32:29 crc kubenswrapper[4805]: I0226 18:32:29.623297 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" event={"ID":"2846a75f-e617-41be-8892-18cbfafee47b","Type":"ContainerStarted","Data":"ccc119b97fd9cb049358f68640cefd043f364c4315bc3f8df5cd8b1504322789"} Feb 26 18:32:29 crc kubenswrapper[4805]: I0226 18:32:29.702934 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7cvl2/crc-debug-ktvb6"] Feb 26 18:32:29 crc kubenswrapper[4805]: I0226 18:32:29.718549 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-7cvl2/crc-debug-ktvb6"] Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.755140 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.949285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2846a75f-e617-41be-8892-18cbfafee47b-host\") pod \"2846a75f-e617-41be-8892-18cbfafee47b\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.949360 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2846a75f-e617-41be-8892-18cbfafee47b-host" (OuterVolumeSpecName: "host") pod "2846a75f-e617-41be-8892-18cbfafee47b" (UID: "2846a75f-e617-41be-8892-18cbfafee47b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.949600 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8m79\" (UniqueName: \"kubernetes.io/projected/2846a75f-e617-41be-8892-18cbfafee47b-kube-api-access-n8m79\") pod \"2846a75f-e617-41be-8892-18cbfafee47b\" (UID: \"2846a75f-e617-41be-8892-18cbfafee47b\") " Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.950216 4805 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2846a75f-e617-41be-8892-18cbfafee47b-host\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.957011 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2846a75f-e617-41be-8892-18cbfafee47b-kube-api-access-n8m79" (OuterVolumeSpecName: "kube-api-access-n8m79") pod "2846a75f-e617-41be-8892-18cbfafee47b" (UID: "2846a75f-e617-41be-8892-18cbfafee47b"). 
InnerVolumeSpecName "kube-api-access-n8m79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:32:30 crc kubenswrapper[4805]: I0226 18:32:30.968999 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2846a75f-e617-41be-8892-18cbfafee47b" path="/var/lib/kubelet/pods/2846a75f-e617-41be-8892-18cbfafee47b/volumes" Feb 26 18:32:31 crc kubenswrapper[4805]: I0226 18:32:31.053191 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8m79\" (UniqueName: \"kubernetes.io/projected/2846a75f-e617-41be-8892-18cbfafee47b-kube-api-access-n8m79\") on node \"crc\" DevicePath \"\"" Feb 26 18:32:31 crc kubenswrapper[4805]: I0226 18:32:31.643520 4805 scope.go:117] "RemoveContainer" containerID="049dc2e70a143fce6d6a756eefa2777a5c3c6c40013add6dcbf88e8c3eeb2ea2" Feb 26 18:32:31 crc kubenswrapper[4805]: I0226 18:32:31.643548 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/crc-debug-ktvb6" Feb 26 18:32:32 crc kubenswrapper[4805]: I0226 18:32:32.977842 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:32:32 crc kubenswrapper[4805]: I0226 18:32:32.978235 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:32:32 crc kubenswrapper[4805]: I0226 18:32:32.978288 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 18:32:32 crc 
kubenswrapper[4805]: I0226 18:32:32.979163 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 18:32:32 crc kubenswrapper[4805]: I0226 18:32:32.979233 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" gracePeriod=600 Feb 26 18:32:33 crc kubenswrapper[4805]: E0226 18:32:33.493294 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:32:33 crc kubenswrapper[4805]: I0226 18:32:33.666531 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" exitCode=0 Feb 26 18:32:33 crc kubenswrapper[4805]: I0226 18:32:33.666576 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166"} Feb 26 18:32:33 crc kubenswrapper[4805]: I0226 18:32:33.666829 4805 scope.go:117] "RemoveContainer" 
containerID="3f9c3533af06cfd13430cde5a4664a1a9f4b41918996448ae9b81875688ff177" Feb 26 18:32:33 crc kubenswrapper[4805]: I0226 18:32:33.667960 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:32:33 crc kubenswrapper[4805]: E0226 18:32:33.668397 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:32:44 crc kubenswrapper[4805]: I0226 18:32:44.955407 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:32:44 crc kubenswrapper[4805]: E0226 18:32:44.956599 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:32:49 crc kubenswrapper[4805]: I0226 18:32:49.283438 4805 scope.go:117] "RemoveContainer" containerID="3c16f7b0ecda5d0eb84f7969d012c6353c1d243245825a0ed2e7de67728a0796" Feb 26 18:32:57 crc kubenswrapper[4805]: I0226 18:32:57.954245 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:32:57 crc kubenswrapper[4805]: E0226 18:32:57.954971 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:32:58 crc kubenswrapper[4805]: I0226 18:32:58.816928 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_26cecc08-6d2b-4e0f-a231-8ac8764e8ddf/init-config-reloader/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.041144 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_26cecc08-6d2b-4e0f-a231-8ac8764e8ddf/init-config-reloader/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.094509 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_26cecc08-6d2b-4e0f-a231-8ac8764e8ddf/alertmanager/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.130332 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_26cecc08-6d2b-4e0f-a231-8ac8764e8ddf/config-reloader/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.269621 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69668dd468-5vzck_70e967a7-f90a-42a8-9d73-087c05c2ad6f/barbican-api-log/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.338751 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74b677ff4d-kgqc6_b3cf852a-8ab9-4f96-b639-cdf3d7cf8407/barbican-keystone-listener/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.407864 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69668dd468-5vzck_70e967a7-f90a-42a8-9d73-087c05c2ad6f/barbican-api/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.661777 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-keystone-listener-74b677ff4d-kgqc6_b3cf852a-8ab9-4f96-b639-cdf3d7cf8407/barbican-keystone-listener-log/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.727445 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664585c8f5-7x9kv_1232829e-cb44-4a21-b33b-c58be5fbd656/barbican-worker/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.761716 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-664585c8f5-7x9kv_1232829e-cb44-4a21-b33b-c58be5fbd656/barbican-worker-log/0.log" Feb 26 18:32:59 crc kubenswrapper[4805]: I0226 18:32:59.953881 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-g4pbh_ea8e6080-7eee-41dd-a4f6-6753bb1cc0de/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.022812 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c49186e-522c-4f97-8d17-40c887d09de8/ceilometer-central-agent/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.118400 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c49186e-522c-4f97-8d17-40c887d09de8/ceilometer-notification-agent/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.174253 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c49186e-522c-4f97-8d17-40c887d09de8/proxy-httpd/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.215684 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6c49186e-522c-4f97-8d17-40c887d09de8/sg-core/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.358934 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ce5c6445-f359-4ebd-ab9c-d86269a25d2e/cinder-api/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: 
I0226 18:33:00.390394 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ce5c6445-f359-4ebd-ab9c-d86269a25d2e/cinder-api-log/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.516441 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e1b0e927-cb66-4694-890d-33b20573ccca/cinder-scheduler/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.604476 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e1b0e927-cb66-4694-890d-33b20573ccca/probe/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.796634 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b6cb0afe-79a7-421d-a18f-b42cebd4398f/cloudkitty-api/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.835427 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b6cb0afe-79a7-421d-a18f-b42cebd4398f/cloudkitty-api-log/0.log" Feb 26 18:33:00 crc kubenswrapper[4805]: I0226 18:33:00.945999 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_5f96258c-e535-406a-b67e-601f769b7e2e/loki-compactor/0.log" Feb 26 18:33:01 crc kubenswrapper[4805]: I0226 18:33:01.028395 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-tk4vq_5e792f59-e6d1-48d3-bc1b-e17d2e0da457/loki-distributor/0.log" Feb 26 18:33:01 crc kubenswrapper[4805]: I0226 18:33:01.199791 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-k2wqw_5512e840-a01e-4669-bae6-a677ae85819c/gateway/0.log" Feb 26 18:33:01 crc kubenswrapper[4805]: I0226 18:33:01.243058 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-psb9w_64b9816e-18ec-481f-95e9-7dc3e56534f6/gateway/0.log" Feb 26 18:33:01 crc 
kubenswrapper[4805]: I0226 18:33:01.542615 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_537eaeba-93f9-4d28-871b-049946f86c2b/loki-ingester/0.log" Feb 26 18:33:01 crc kubenswrapper[4805]: I0226 18:33:01.641953 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_4ac81593-9624-455e-89a0-fd8e84a4e8b5/loki-index-gateway/0.log" Feb 26 18:33:01 crc kubenswrapper[4805]: I0226 18:33:01.987964 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-v8lm6_8cc224a9-ec5a-40b2-b0b6-0905f3553e08/loki-query-frontend/0.log" Feb 26 18:33:02 crc kubenswrapper[4805]: I0226 18:33:02.287165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-wxfvc_3dfb8794-f574-4514-b4e3-b7cdcc1460b5/loki-querier/0.log" Feb 26 18:33:02 crc kubenswrapper[4805]: I0226 18:33:02.376240 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-5hg6g_2cefa4ec-85ad-4c95-a9dc-06978d1325c2/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:02 crc kubenswrapper[4805]: I0226 18:33:02.745781 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-q5n5f_4fd5faad-e8f5-48cd-93a1-5818fc463c2e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:02 crc kubenswrapper[4805]: I0226 18:33:02.899238 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-4znt7_40a64ea1-bab4-4761-a37f-865fdcf16fc6/init/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 18:33:03.225974 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-4znt7_40a64ea1-bab4-4761-a37f-865fdcf16fc6/init/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 
18:33:03.370406 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c4b758ff5-4znt7_40a64ea1-bab4-4761-a37f-865fdcf16fc6/dnsmasq-dns/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 18:33:03.456318 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xjqb9_8c450535-29d9-4f24-9d80-e1059a310a12/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 18:33:03.571029 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ab72a0e1-fd82-4334-9124-8a3cc815bdd6/glance-httpd/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 18:33:03.587118 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ab72a0e1-fd82-4334-9124-8a3cc815bdd6/glance-log/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 18:33:03.753430 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2e2871a-f575-4f8a-ae77-51f8da2aad53/glance-httpd/0.log" Feb 26 18:33:03 crc kubenswrapper[4805]: I0226 18:33:03.861349 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f2e2871a-f575-4f8a-ae77-51f8da2aad53/glance-log/0.log" Feb 26 18:33:04 crc kubenswrapper[4805]: I0226 18:33:04.008900 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kmrn9_48c3fb2b-9621-4ad8-aa10-4c2b0fb8d882/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:04 crc kubenswrapper[4805]: I0226 18:33:04.223579 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-pfzsv_705e88a7-b9ad-435e-9e9e-802e433d3bb0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:04 crc kubenswrapper[4805]: I0226 18:33:04.458909 4805 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29535481-jqwcv_b79390f8-65c0-4333-b6a1-a19baf15714c/keystone-cron/0.log" Feb 26 18:33:04 crc kubenswrapper[4805]: I0226 18:33:04.756150 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-847cbf4c89-k2szx_2e15f4c2-3c78-4bb0-b1e0-ae8876e28cba/keystone-api/0.log" Feb 26 18:33:05 crc kubenswrapper[4805]: I0226 18:33:05.177980 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ce1b9317-a017-46fb-8be3-686d80446649/kube-state-metrics/0.log" Feb 26 18:33:05 crc kubenswrapper[4805]: I0226 18:33:05.230189 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ns5jt_9acbe40b-7d2c-44f9-bb4d-d2cd0a4a7753/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:05 crc kubenswrapper[4805]: I0226 18:33:05.643715 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7cb9cbd877-vr7d7_df0744eb-153e-4f1e-a678-cd7ed256e5dc/neutron-httpd/0.log" Feb 26 18:33:05 crc kubenswrapper[4805]: I0226 18:33:05.709424 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7cb9cbd877-vr7d7_df0744eb-153e-4f1e-a678-cd7ed256e5dc/neutron-api/0.log" Feb 26 18:33:05 crc kubenswrapper[4805]: I0226 18:33:05.879493 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f86tr_70a36330-f6cb-4235-a39a-902ccc54bdde/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:06 crc kubenswrapper[4805]: I0226 18:33:06.509946 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dcfa6af2-6085-43cb-94a8-a02bebd05f49/nova-api-log/0.log" Feb 26 18:33:06 crc kubenswrapper[4805]: I0226 18:33:06.762954 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_dcfa6af2-6085-43cb-94a8-a02bebd05f49/nova-api-api/0.log" Feb 26 18:33:06 crc kubenswrapper[4805]: I0226 18:33:06.997804 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_59691810-7196-4633-8fc6-b46a505b653d/nova-cell0-conductor-conductor/0.log" Feb 26 18:33:07 crc kubenswrapper[4805]: I0226 18:33:07.099471 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_82029b76-8e8c-4aab-ac16-906345d63ad8/nova-cell1-conductor-conductor/0.log" Feb 26 18:33:07 crc kubenswrapper[4805]: I0226 18:33:07.312257 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_404c1e92-b497-4dfe-aeb4-54db98639b48/nova-cell1-novncproxy-novncproxy/0.log" Feb 26 18:33:07 crc kubenswrapper[4805]: I0226 18:33:07.432280 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-btpdr_16ac99ed-1590-491d-938b-7a795e72c605/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:07 crc kubenswrapper[4805]: I0226 18:33:07.731412 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_14771d36-b57b-4f52-9367-0694e42e2cca/nova-metadata-log/0.log" Feb 26 18:33:08 crc kubenswrapper[4805]: I0226 18:33:08.128761 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a6b7a4cd-7367-4048-9560-99759278c58b/nova-scheduler-scheduler/0.log" Feb 26 18:33:08 crc kubenswrapper[4805]: I0226 18:33:08.295490 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1f73362-f45b-43a1-a1c7-ec280cb0f3c8/mysql-bootstrap/0.log" Feb 26 18:33:08 crc kubenswrapper[4805]: I0226 18:33:08.495730 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1f73362-f45b-43a1-a1c7-ec280cb0f3c8/galera/0.log" Feb 26 18:33:08 crc kubenswrapper[4805]: I0226 
18:33:08.560993 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1f73362-f45b-43a1-a1c7-ec280cb0f3c8/mysql-bootstrap/0.log" Feb 26 18:33:08 crc kubenswrapper[4805]: I0226 18:33:08.819373 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0be4e187-2328-4b07-825d-2435d153499d/mysql-bootstrap/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.047264 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0be4e187-2328-4b07-825d-2435d153499d/mysql-bootstrap/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.060084 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0be4e187-2328-4b07-825d-2435d153499d/galera/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.269108 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3bfa8b25-9a4e-482c-b2f6-15347757aec2/openstackclient/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.468659 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2k9nw_6ff856bd-109f-4978-9b06-546d2afaf577/ovn-controller/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.775164 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_14771d36-b57b-4f52-9367-0694e42e2cca/nova-metadata-metadata/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.775708 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ns67s_01cf09f3-76a8-4c3b-ae5e-320e3fcc38cd/openstack-network-exporter/0.log" Feb 26 18:33:09 crc kubenswrapper[4805]: I0226 18:33:09.957611 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:33:09 crc kubenswrapper[4805]: E0226 18:33:09.957896 4805 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.004512 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tzv64_3645c31c-6e0b-4f42-b270-91cf46d0aaf9/ovsdb-server-init/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.216367 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tzv64_3645c31c-6e0b-4f42-b270-91cf46d0aaf9/ovsdb-server/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.264570 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tzv64_3645c31c-6e0b-4f42-b270-91cf46d0aaf9/ovs-vswitchd/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.269160 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-tzv64_3645c31c-6e0b-4f42-b270-91cf46d0aaf9/ovsdb-server-init/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.535362 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ghqf8_92900aa9-a844-4669-993f-b2250e3093a1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.724742 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8563b335-2f2c-40da-811d-2ceaf2299da8/ovn-northd/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 18:33:10.738134 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8563b335-2f2c-40da-811d-2ceaf2299da8/openstack-network-exporter/0.log" Feb 26 18:33:10 crc kubenswrapper[4805]: I0226 
18:33:10.995412 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9/ovsdbserver-nb/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.035589 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f8fb0872-9bb6-4ccf-92fd-58bf0a7868d9/openstack-network-exporter/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.203597 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb/ovsdbserver-sb/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.213934 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2afcdfc8-6c48-4cd0-bd1c-9ed03c1ad8fb/openstack-network-exporter/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.486861 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59c7c6ffc6-vlxnj_91531636-caf5-4454-8cff-96134d359116/placement-api/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.617255 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59c7c6ffc6-vlxnj_91531636-caf5-4454-8cff-96134d359116/placement-log/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.688539 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_596906a4-e4c6-4ede-826b-d349dc6a8dbf/init-config-reloader/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.853449 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_596906a4-e4c6-4ede-826b-d349dc6a8dbf/init-config-reloader/0.log" Feb 26 18:33:11 crc kubenswrapper[4805]: I0226 18:33:11.916565 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_596906a4-e4c6-4ede-826b-d349dc6a8dbf/config-reloader/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: 
I0226 18:33:12.000177 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_596906a4-e4c6-4ede-826b-d349dc6a8dbf/prometheus/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.129769 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_596906a4-e4c6-4ede-826b-d349dc6a8dbf/thanos-sidecar/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.241847 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a/setup-container/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.438724 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a/setup-container/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.504673 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0e0bb0e1-3109-4794-9b9e-78ca1e8ee75a/rabbitmq/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.646509 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2/setup-container/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.927165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2/setup-container/0.log" Feb 26 18:33:12 crc kubenswrapper[4805]: I0226 18:33:12.934129 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3bfcf1ee-e3a3-4d5b-bc4e-23f7a53992b2/rabbitmq/0.log" Feb 26 18:33:13 crc kubenswrapper[4805]: I0226 18:33:13.190467 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-xsfks_9562d61f-fcfb-40c3-8a39-500bfa314c5e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:13 crc 
kubenswrapper[4805]: I0226 18:33:13.332171 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-pllc8_bf741ef5-d678-40c2-99b1-7e2f4db7787a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:13 crc kubenswrapper[4805]: I0226 18:33:13.591316 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-cvhz2_2b789c4f-f811-4d44-8337-115a3a9d1ca7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:13 crc kubenswrapper[4805]: I0226 18:33:13.737712 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pxdcx_26d29c02-84a8-41af-b5ae-7ad977cc33a1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:13 crc kubenswrapper[4805]: I0226 18:33:13.849826 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wmbgb_d27a40c7-3e84-48cd-849c-a318aac82222/ssh-known-hosts-edpm-deployment/0.log" Feb 26 18:33:14 crc kubenswrapper[4805]: I0226 18:33:14.085455 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7d9687b795-zqn9b_01b8f655-2944-4562-89c4-d2bcf9516cde/proxy-server/0.log" Feb 26 18:33:14 crc kubenswrapper[4805]: I0226 18:33:14.267824 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7d9687b795-zqn9b_01b8f655-2944-4562-89c4-d2bcf9516cde/proxy-httpd/0.log" Feb 26 18:33:14 crc kubenswrapper[4805]: I0226 18:33:14.788608 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/account-auditor/0.log" Feb 26 18:33:14 crc kubenswrapper[4805]: I0226 18:33:14.795515 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wlxbg_81fa1ce3-014a-4a1d-b880-7d1ea1fb6975/swift-ring-rebalance/0.log" Feb 26 18:33:14 crc kubenswrapper[4805]: I0226 
18:33:14.964418 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/account-reaper/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.040474 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/account-replicator/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.072861 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/account-server/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.159433 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/container-auditor/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.284771 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/container-replicator/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.338784 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/container-server/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.462443 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/container-updater/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.576736 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/object-auditor/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.588177 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/object-expirer/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.724752 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/object-replicator/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.783520 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/object-updater/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.805763 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/object-server/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.909665 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/rsync/0.log" Feb 26 18:33:15 crc kubenswrapper[4805]: I0226 18:33:15.991637 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a466ee40-e6ef-4a36-96c6-88e7ce00a28c/swift-recon-cron/0.log" Feb 26 18:33:16 crc kubenswrapper[4805]: I0226 18:33:16.178390 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-xnt74_b4cbe0c5-8c10-4739-9b1b-7c3b84f7fdc0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:16 crc kubenswrapper[4805]: I0226 18:33:16.774950 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_630c211f-3dd5-4951-9476-249d0f6bc049/tempest-tests-tempest-tests-runner/0.log" Feb 26 18:33:16 crc kubenswrapper[4805]: I0226 18:33:16.919889 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c2a06e59-5209-4469-b044-b644558241b8/test-operator-logs-container/0.log" Feb 26 18:33:17 crc kubenswrapper[4805]: I0226 18:33:17.001004 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-42gxs_40f2cf4c-0815-415c-930f-90aebbfa5d64/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 26 18:33:22 crc kubenswrapper[4805]: I0226 18:33:22.498653 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_a048b7a1-414b-4465-931c-cf987921d7e6/cloudkitty-proc/0.log" Feb 26 18:33:23 crc kubenswrapper[4805]: I0226 18:33:23.937135 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2a78640b-0558-468f-893e-db7794aeb8b1/memcached/0.log" Feb 26 18:33:24 crc kubenswrapper[4805]: I0226 18:33:24.953235 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:33:24 crc kubenswrapper[4805]: E0226 18:33:24.954308 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:33:37 crc kubenswrapper[4805]: I0226 18:33:37.952893 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:33:37 crc kubenswrapper[4805]: E0226 18:33:37.954989 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:33:50 crc kubenswrapper[4805]: I0226 
18:33:50.956107 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:33:50 crc kubenswrapper[4805]: E0226 18:33:50.957031 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:33:50 crc kubenswrapper[4805]: I0226 18:33:50.990753 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-tzsf5_b5c2bc9e-6d6f-4d6e-9fa1-379b26708be3/manager/0.log" Feb 26 18:33:51 crc kubenswrapper[4805]: I0226 18:33:51.174275 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/util/0.log" Feb 26 18:33:51 crc kubenswrapper[4805]: I0226 18:33:51.506456 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/pull/0.log" Feb 26 18:33:51 crc kubenswrapper[4805]: I0226 18:33:51.519998 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/util/0.log" Feb 26 18:33:51 crc kubenswrapper[4805]: I0226 18:33:51.748100 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/pull/0.log" Feb 26 18:33:51 crc kubenswrapper[4805]: I0226 18:33:51.976246 4805 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/util/0.log" Feb 26 18:33:52 crc kubenswrapper[4805]: I0226 18:33:52.049526 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/pull/0.log" Feb 26 18:33:52 crc kubenswrapper[4805]: I0226 18:33:52.178740 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e8e56771b3ca26b306ce98435cdf3ec981b348d8eb54fc86b3202e39b74bkfd_780ef4ce-e438-4cdf-8a02-9d7e5fda96f3/extract/0.log" Feb 26 18:33:52 crc kubenswrapper[4805]: I0226 18:33:52.521966 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-qcp9p_e649f248-07e5-4bf2-83bf-0c7fb532dc16/manager/0.log" Feb 26 18:33:52 crc kubenswrapper[4805]: I0226 18:33:52.730164 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-69rn4_1a0573de-bd9c-4917-93d6-4fe8ae9fde94/manager/0.log" Feb 26 18:33:52 crc kubenswrapper[4805]: I0226 18:33:52.909627 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-lss24_d7d01c08-16c4-411f-82f4-b7747d6222f7/manager/0.log" Feb 26 18:33:53 crc kubenswrapper[4805]: I0226 18:33:53.039252 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-24fxg_2cf12233-5655-4a94-8f0a-bdd68756de74/manager/0.log" Feb 26 18:33:53 crc kubenswrapper[4805]: I0226 18:33:53.560317 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-6xzvs_8f2e984c-ff5c-419b-bcd2-3e0d53825b89/manager/0.log" Feb 26 18:33:53 crc kubenswrapper[4805]: 
I0226 18:33:53.810995 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-8xw2w_23a0badb-7b1f-4d18-8622-3248adbfe0ea/manager/0.log" Feb 26 18:33:54 crc kubenswrapper[4805]: I0226 18:33:54.120739 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-wxtbt_a824c389-facb-49e7-91e9-a05c95cdd2b9/manager/0.log" Feb 26 18:33:54 crc kubenswrapper[4805]: I0226 18:33:54.282469 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-7lmx7_e4c0ad7f-8daf-4599-a457-135483730ac6/manager/0.log" Feb 26 18:33:54 crc kubenswrapper[4805]: I0226 18:33:54.578818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-5x6cj_171cc925-66fe-4ceb-b9b5-56b48a121642/manager/0.log" Feb 26 18:33:54 crc kubenswrapper[4805]: I0226 18:33:54.772144 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-sfq49_4a5c2658-ad3f-49b7-bb08-64aa33210ea4/manager/0.log" Feb 26 18:33:54 crc kubenswrapper[4805]: I0226 18:33:54.914047 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-n8jrv_a0bc07cc-8639-49ce-824d-b1cde1a7c500/manager/0.log" Feb 26 18:33:55 crc kubenswrapper[4805]: I0226 18:33:55.502986 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cphz8p_ce3a5e9a-ddf3-47a4-8b9c-cf04573c34c8/manager/0.log" Feb 26 18:33:55 crc kubenswrapper[4805]: I0226 18:33:55.522129 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-pc48x_10b3e5a9-53df-421d-a6dc-ccb44f03f432/manager/0.log" Feb 26 18:33:55 
crc kubenswrapper[4805]: I0226 18:33:55.926249 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-558769b65d-mlxxb_2e41fb9f-3f0b-4e98-b5da-6d3b8cc681a5/operator/0.log" Feb 26 18:33:56 crc kubenswrapper[4805]: I0226 18:33:56.018335 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gq9w7_c98178ae-c45b-4d31-a1b9-afa30a2b25d2/registry-server/0.log" Feb 26 18:33:56 crc kubenswrapper[4805]: I0226 18:33:56.247889 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-qk4bw_a566e6f7-f550-4c90-a3fe-f5b66061d126/manager/0.log" Feb 26 18:33:56 crc kubenswrapper[4805]: I0226 18:33:56.346804 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-s49nh_e234f401-20b1-4f4b-b884-0ccae8a82887/manager/0.log" Feb 26 18:33:56 crc kubenswrapper[4805]: I0226 18:33:56.515034 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7gnkt_7bdb778d-d9d7-4d46-ba96-208bc22804e9/operator/0.log" Feb 26 18:33:56 crc kubenswrapper[4805]: I0226 18:33:56.729582 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-rdgbm_2e440888-07ce-4a09-ac04-ab52fae67596/manager/0.log" Feb 26 18:33:56 crc kubenswrapper[4805]: I0226 18:33:56.970770 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-4czft_8afb1a24-2085-497a-aea6-c5e35d58d2c2/manager/0.log" Feb 26 18:33:57 crc kubenswrapper[4805]: I0226 18:33:57.246982 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-29g7k_12c25d57-ec5f-48f9-83c6-9f099d56c313/manager/0.log" Feb 26 18:33:57 crc 
kubenswrapper[4805]: I0226 18:33:57.481291 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-554d785765-j74p9_dfb3f113-7bae-465f-aac0-2697ba32d846/manager/0.log" Feb 26 18:33:57 crc kubenswrapper[4805]: I0226 18:33:57.666713 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79f489f8d5-t8gwl_32486358-f7d6-45da-bade-d5af6fc319fd/manager/0.log" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.152893 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535514-tf9js"] Feb 26 18:34:00 crc kubenswrapper[4805]: E0226 18:34:00.153698 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2846a75f-e617-41be-8892-18cbfafee47b" containerName="container-00" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.153715 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="2846a75f-e617-41be-8892-18cbfafee47b" containerName="container-00" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.153976 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="2846a75f-e617-41be-8892-18cbfafee47b" containerName="container-00" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.154805 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.157679 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.158738 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.158747 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.164424 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535514-tf9js"] Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.287225 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8pt\" (UniqueName: \"kubernetes.io/projected/66e0bb37-53c2-4072-bb79-49dd3a8760c8-kube-api-access-9p8pt\") pod \"auto-csr-approver-29535514-tf9js\" (UID: \"66e0bb37-53c2-4072-bb79-49dd3a8760c8\") " pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.389900 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p8pt\" (UniqueName: \"kubernetes.io/projected/66e0bb37-53c2-4072-bb79-49dd3a8760c8-kube-api-access-9p8pt\") pod \"auto-csr-approver-29535514-tf9js\" (UID: \"66e0bb37-53c2-4072-bb79-49dd3a8760c8\") " pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.420151 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p8pt\" (UniqueName: \"kubernetes.io/projected/66e0bb37-53c2-4072-bb79-49dd3a8760c8-kube-api-access-9p8pt\") pod \"auto-csr-approver-29535514-tf9js\" (UID: \"66e0bb37-53c2-4072-bb79-49dd3a8760c8\") " 
pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:00 crc kubenswrapper[4805]: I0226 18:34:00.472415 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:01 crc kubenswrapper[4805]: I0226 18:34:01.054239 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535514-tf9js"] Feb 26 18:34:01 crc kubenswrapper[4805]: I0226 18:34:01.338269 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-vtqrv_60a1ca6f-55b2-43e0-a86b-b38ecf7190f6/manager/0.log" Feb 26 18:34:01 crc kubenswrapper[4805]: I0226 18:34:01.683005 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535514-tf9js" event={"ID":"66e0bb37-53c2-4072-bb79-49dd3a8760c8","Type":"ContainerStarted","Data":"f3352c303b28afe6d836a0a1e3acf09d4d5fd11c2810e98801da838cbedff7dc"} Feb 26 18:34:02 crc kubenswrapper[4805]: I0226 18:34:02.695287 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535514-tf9js" event={"ID":"66e0bb37-53c2-4072-bb79-49dd3a8760c8","Type":"ContainerStarted","Data":"725d98ccbae346d4c6f67194f1ad982352ca399ec4758ae1efb13ad8899c6502"} Feb 26 18:34:02 crc kubenswrapper[4805]: I0226 18:34:02.721646 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535514-tf9js" podStartSLOduration=1.611790442 podStartE2EDuration="2.72162343s" podCreationTimestamp="2026-02-26 18:34:00 +0000 UTC" firstStartedPulling="2026-02-26 18:34:01.062788427 +0000 UTC m=+4755.624542766" lastFinishedPulling="2026-02-26 18:34:02.172621415 +0000 UTC m=+4756.734375754" observedRunningTime="2026-02-26 18:34:02.709008182 +0000 UTC m=+4757.270762521" watchObservedRunningTime="2026-02-26 18:34:02.72162343 +0000 UTC m=+4757.283377769" Feb 26 18:34:02 crc 
kubenswrapper[4805]: I0226 18:34:02.953846 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:34:02 crc kubenswrapper[4805]: E0226 18:34:02.954250 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.339589 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mcbv4"] Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.342337 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.352887 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcbv4"] Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.413925 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ps5n\" (UniqueName: \"kubernetes.io/projected/0d76a758-43b7-4c37-a79c-dcb9770a8b09-kube-api-access-6ps5n\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.414300 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-utilities\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" 
Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.414374 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-catalog-content\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.515711 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ps5n\" (UniqueName: \"kubernetes.io/projected/0d76a758-43b7-4c37-a79c-dcb9770a8b09-kube-api-access-6ps5n\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.515790 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-utilities\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.515829 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-catalog-content\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.516411 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-utilities\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: 
I0226 18:34:03.516416 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-catalog-content\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.543445 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ps5n\" (UniqueName: \"kubernetes.io/projected/0d76a758-43b7-4c37-a79c-dcb9770a8b09-kube-api-access-6ps5n\") pod \"redhat-operators-mcbv4\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.713666 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.865784 4805 generic.go:334] "Generic (PLEG): container finished" podID="66e0bb37-53c2-4072-bb79-49dd3a8760c8" containerID="725d98ccbae346d4c6f67194f1ad982352ca399ec4758ae1efb13ad8899c6502" exitCode=0 Feb 26 18:34:03 crc kubenswrapper[4805]: I0226 18:34:03.865824 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535514-tf9js" event={"ID":"66e0bb37-53c2-4072-bb79-49dd3a8760c8","Type":"ContainerDied","Data":"725d98ccbae346d4c6f67194f1ad982352ca399ec4758ae1efb13ad8899c6502"} Feb 26 18:34:04 crc kubenswrapper[4805]: I0226 18:34:04.411470 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcbv4"] Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.366089 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.374180 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p8pt\" (UniqueName: \"kubernetes.io/projected/66e0bb37-53c2-4072-bb79-49dd3a8760c8-kube-api-access-9p8pt\") pod \"66e0bb37-53c2-4072-bb79-49dd3a8760c8\" (UID: \"66e0bb37-53c2-4072-bb79-49dd3a8760c8\") " Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.379372 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e0bb37-53c2-4072-bb79-49dd3a8760c8-kube-api-access-9p8pt" (OuterVolumeSpecName: "kube-api-access-9p8pt") pod "66e0bb37-53c2-4072-bb79-49dd3a8760c8" (UID: "66e0bb37-53c2-4072-bb79-49dd3a8760c8"). InnerVolumeSpecName "kube-api-access-9p8pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.476646 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p8pt\" (UniqueName: \"kubernetes.io/projected/66e0bb37-53c2-4072-bb79-49dd3a8760c8-kube-api-access-9p8pt\") on node \"crc\" DevicePath \"\"" Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.892047 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535514-tf9js" event={"ID":"66e0bb37-53c2-4072-bb79-49dd3a8760c8","Type":"ContainerDied","Data":"f3352c303b28afe6d836a0a1e3acf09d4d5fd11c2810e98801da838cbedff7dc"} Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.892085 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3352c303b28afe6d836a0a1e3acf09d4d5fd11c2810e98801da838cbedff7dc" Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.892112 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535514-tf9js" Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.893463 4805 generic.go:334] "Generic (PLEG): container finished" podID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerID="8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269" exitCode=0 Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.893490 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerDied","Data":"8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269"} Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.893506 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerStarted","Data":"8256df8a53785272551540de937fd45a82aa67957c297577f5a105f018a21908"} Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.899801 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535508-9cpt6"] Feb 26 18:34:05 crc kubenswrapper[4805]: I0226 18:34:05.928920 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535508-9cpt6"] Feb 26 18:34:06 crc kubenswrapper[4805]: I0226 18:34:06.972642 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e531362b-5fcf-4aac-bf9c-ddbee35eb1b7" path="/var/lib/kubelet/pods/e531362b-5fcf-4aac-bf9c-ddbee35eb1b7/volumes" Feb 26 18:34:07 crc kubenswrapper[4805]: I0226 18:34:07.910565 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerStarted","Data":"508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4"} Feb 26 18:34:14 crc kubenswrapper[4805]: I0226 18:34:14.001193 4805 generic.go:334] "Generic (PLEG): 
container finished" podID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerID="508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4" exitCode=0 Feb 26 18:34:14 crc kubenswrapper[4805]: I0226 18:34:14.001284 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerDied","Data":"508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4"} Feb 26 18:34:15 crc kubenswrapper[4805]: I0226 18:34:15.012589 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerStarted","Data":"54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99"} Feb 26 18:34:15 crc kubenswrapper[4805]: I0226 18:34:15.034774 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mcbv4" podStartSLOduration=3.508109311 podStartE2EDuration="12.034756698s" podCreationTimestamp="2026-02-26 18:34:03 +0000 UTC" firstStartedPulling="2026-02-26 18:34:05.895060369 +0000 UTC m=+4760.456814708" lastFinishedPulling="2026-02-26 18:34:14.421707746 +0000 UTC m=+4768.983462095" observedRunningTime="2026-02-26 18:34:15.028113361 +0000 UTC m=+4769.589867700" watchObservedRunningTime="2026-02-26 18:34:15.034756698 +0000 UTC m=+4769.596511037" Feb 26 18:34:15 crc kubenswrapper[4805]: I0226 18:34:15.953814 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:34:15 crc kubenswrapper[4805]: E0226 18:34:15.954419 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:34:23 crc kubenswrapper[4805]: I0226 18:34:23.714556 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:23 crc kubenswrapper[4805]: I0226 18:34:23.715067 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:24 crc kubenswrapper[4805]: I0226 18:34:24.747673 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-frcbs_07a7b8f7-2103-443e-906f-5d2d74baa5a9/control-plane-machine-set-operator/0.log" Feb 26 18:34:24 crc kubenswrapper[4805]: I0226 18:34:24.906315 4805 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mcbv4" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="registry-server" probeResult="failure" output=< Feb 26 18:34:24 crc kubenswrapper[4805]: timeout: failed to connect service ":50051" within 1s Feb 26 18:34:24 crc kubenswrapper[4805]: > Feb 26 18:34:24 crc kubenswrapper[4805]: I0226 18:34:24.978956 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v45rz_99edc210-b315-4224-8d9f-a5911f8527b2/kube-rbac-proxy/0.log" Feb 26 18:34:25 crc kubenswrapper[4805]: I0226 18:34:25.015831 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-v45rz_99edc210-b315-4224-8d9f-a5911f8527b2/machine-api-operator/0.log" Feb 26 18:34:28 crc kubenswrapper[4805]: I0226 18:34:28.953166 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:34:28 crc kubenswrapper[4805]: E0226 18:34:28.954003 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:34:34 crc kubenswrapper[4805]: I0226 18:34:34.138728 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:34 crc kubenswrapper[4805]: I0226 18:34:34.207043 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:34 crc kubenswrapper[4805]: I0226 18:34:34.540439 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcbv4"] Feb 26 18:34:35 crc kubenswrapper[4805]: I0226 18:34:35.206127 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mcbv4" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="registry-server" containerID="cri-o://54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99" gracePeriod=2 Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.084367 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.091964 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ps5n\" (UniqueName: \"kubernetes.io/projected/0d76a758-43b7-4c37-a79c-dcb9770a8b09-kube-api-access-6ps5n\") pod \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.092131 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-utilities\") pod \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.092275 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-catalog-content\") pod \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\" (UID: \"0d76a758-43b7-4c37-a79c-dcb9770a8b09\") " Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.093126 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-utilities" (OuterVolumeSpecName: "utilities") pod "0d76a758-43b7-4c37-a79c-dcb9770a8b09" (UID: "0d76a758-43b7-4c37-a79c-dcb9770a8b09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.101749 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d76a758-43b7-4c37-a79c-dcb9770a8b09-kube-api-access-6ps5n" (OuterVolumeSpecName: "kube-api-access-6ps5n") pod "0d76a758-43b7-4c37-a79c-dcb9770a8b09" (UID: "0d76a758-43b7-4c37-a79c-dcb9770a8b09"). InnerVolumeSpecName "kube-api-access-6ps5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.198426 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.198624 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ps5n\" (UniqueName: \"kubernetes.io/projected/0d76a758-43b7-4c37-a79c-dcb9770a8b09-kube-api-access-6ps5n\") on node \"crc\" DevicePath \"\"" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.227752 4805 generic.go:334] "Generic (PLEG): container finished" podID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerID="54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99" exitCode=0 Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.227805 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerDied","Data":"54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99"} Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.227837 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcbv4" event={"ID":"0d76a758-43b7-4c37-a79c-dcb9770a8b09","Type":"ContainerDied","Data":"8256df8a53785272551540de937fd45a82aa67957c297577f5a105f018a21908"} Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.227861 4805 scope.go:117] "RemoveContainer" containerID="54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.228081 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mcbv4" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.256423 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d76a758-43b7-4c37-a79c-dcb9770a8b09" (UID: "0d76a758-43b7-4c37-a79c-dcb9770a8b09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.278332 4805 scope.go:117] "RemoveContainer" containerID="508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.300680 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d76a758-43b7-4c37-a79c-dcb9770a8b09-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.307119 4805 scope.go:117] "RemoveContainer" containerID="8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.351532 4805 scope.go:117] "RemoveContainer" containerID="54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99" Feb 26 18:34:36 crc kubenswrapper[4805]: E0226 18:34:36.351979 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99\": container with ID starting with 54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99 not found: ID does not exist" containerID="54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.352045 4805 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99"} err="failed to get container status \"54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99\": rpc error: code = NotFound desc = could not find container \"54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99\": container with ID starting with 54bc857a607d2935b6ce5ed6315b101964bb50f2bc94f6f0a03e36fe0f77ce99 not found: ID does not exist" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.352076 4805 scope.go:117] "RemoveContainer" containerID="508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4" Feb 26 18:34:36 crc kubenswrapper[4805]: E0226 18:34:36.352516 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4\": container with ID starting with 508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4 not found: ID does not exist" containerID="508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.352549 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4"} err="failed to get container status \"508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4\": rpc error: code = NotFound desc = could not find container \"508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4\": container with ID starting with 508d638b53efde95f8095f720d0d1e4367875020c30e0268eeb4fa3061ed17f4 not found: ID does not exist" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.352566 4805 scope.go:117] "RemoveContainer" containerID="8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269" Feb 26 18:34:36 crc kubenswrapper[4805]: E0226 18:34:36.352835 4805 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269\": container with ID starting with 8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269 not found: ID does not exist" containerID="8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.352874 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269"} err="failed to get container status \"8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269\": rpc error: code = NotFound desc = could not find container \"8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269\": container with ID starting with 8177fb4254a7468dff77a9ea67b78b5670c4f21400dd4c3a37f9ed3883a6f269 not found: ID does not exist" Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.562686 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcbv4"] Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.571364 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mcbv4"] Feb 26 18:34:36 crc kubenswrapper[4805]: I0226 18:34:36.969838 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" path="/var/lib/kubelet/pods/0d76a758-43b7-4c37-a79c-dcb9770a8b09/volumes" Feb 26 18:34:41 crc kubenswrapper[4805]: I0226 18:34:41.973788 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8ncfd_6474d164-4c68-4fba-9eaf-ec92e1636ea9/cert-manager-controller/0.log" Feb 26 18:34:42 crc kubenswrapper[4805]: I0226 18:34:42.176891 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-q4dc5_42dac4fd-2d52-471d-88df-5c9c12963936/cert-manager-cainjector/0.log" Feb 26 18:34:42 crc kubenswrapper[4805]: I0226 18:34:42.272504 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mzsw5_1916615f-2f09-479c-896e-6be0815477cf/cert-manager-webhook/0.log" Feb 26 18:34:42 crc kubenswrapper[4805]: I0226 18:34:42.953580 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:34:42 crc kubenswrapper[4805]: E0226 18:34:42.954119 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:34:49 crc kubenswrapper[4805]: I0226 18:34:49.383910 4805 scope.go:117] "RemoveContainer" containerID="284bb6e4973c75a892264e2366af98f23bed65692bcab320a4e10f1af9e107f2" Feb 26 18:34:56 crc kubenswrapper[4805]: I0226 18:34:56.869709 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-69z2b_c118280e-50c2-42a7-a69f-5cd4654ad329/nmstate-console-plugin/0.log" Feb 26 18:34:57 crc kubenswrapper[4805]: I0226 18:34:57.072874 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-kzzsr_9545e547-3f0a-461b-ac54-b0e3bc910543/kube-rbac-proxy/0.log" Feb 26 18:34:57 crc kubenswrapper[4805]: I0226 18:34:57.101165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v5h82_b589e613-548a-4e38-9802-b9aa97bea8ba/nmstate-handler/0.log" Feb 26 18:34:57 crc kubenswrapper[4805]: I0226 
18:34:57.262006 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-kzzsr_9545e547-3f0a-461b-ac54-b0e3bc910543/nmstate-metrics/0.log" Feb 26 18:34:57 crc kubenswrapper[4805]: I0226 18:34:57.312684 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-vw2h5_37140a20-72cb-40f9-814f-2d6d02a3d4fa/nmstate-operator/0.log" Feb 26 18:34:57 crc kubenswrapper[4805]: I0226 18:34:57.472325 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-4tg62_20b243d2-9299-4d1b-b27e-3525992664ce/nmstate-webhook/0.log" Feb 26 18:34:57 crc kubenswrapper[4805]: I0226 18:34:57.953454 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:34:57 crc kubenswrapper[4805]: E0226 18:34:57.954231 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:35:09 crc kubenswrapper[4805]: I0226 18:35:09.968251 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:35:09 crc kubenswrapper[4805]: E0226 18:35:09.970865 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" 
podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:35:11 crc kubenswrapper[4805]: I0226 18:35:11.569644 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dbdbb8494-f8mdl_9d1d4276-53fb-4d5a-814b-c29e097718d0/kube-rbac-proxy/0.log" Feb 26 18:35:11 crc kubenswrapper[4805]: I0226 18:35:11.615827 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dbdbb8494-f8mdl_9d1d4276-53fb-4d5a-814b-c29e097718d0/manager/0.log" Feb 26 18:35:21 crc kubenswrapper[4805]: I0226 18:35:21.953699 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:35:21 crc kubenswrapper[4805]: E0226 18:35:21.954615 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:35:25 crc kubenswrapper[4805]: I0226 18:35:25.467005 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-htjln_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc/prometheus-operator/0.log" Feb 26 18:35:25 crc kubenswrapper[4805]: I0226 18:35:25.480149 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4/prometheus-operator-admission-webhook/0.log" Feb 26 18:35:25 crc kubenswrapper[4805]: I0226 18:35:25.660361 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_174cbe90-075b-4c73-ae20-cc8a47c42d06/prometheus-operator-admission-webhook/0.log" Feb 26 18:35:25 crc kubenswrapper[4805]: I0226 18:35:25.731815 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t7gbj_df96d4df-f5b6-4b7b-956d-f957313d1914/operator/0.log" Feb 26 18:35:25 crc kubenswrapper[4805]: I0226 18:35:25.839177 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2xtcm_c48a9594-81fc-493b-98b0-fc3ad286abe2/perses-operator/0.log" Feb 26 18:35:33 crc kubenswrapper[4805]: I0226 18:35:33.954598 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:35:33 crc kubenswrapper[4805]: E0226 18:35:33.955447 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.302731 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-4jj6l_2481b089-843f-4898-9f85-36769bac7219/kube-rbac-proxy/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.442700 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-4jj6l_2481b089-843f-4898-9f85-36769bac7219/controller/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.580185 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-frr-files/0.log" 
Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.758523 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-frr-files/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.786959 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-reloader/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.799749 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-metrics/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.800402 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-reloader/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.961685 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-frr-files/0.log" Feb 26 18:35:41 crc kubenswrapper[4805]: I0226 18:35:41.972055 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-reloader/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.009050 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-metrics/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.041385 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-metrics/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.161670 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-reloader/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.172166 4805 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-frr-files/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.184650 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/cp-metrics/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.217741 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/controller/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.382682 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/frr-metrics/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.437226 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/kube-rbac-proxy/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.541159 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/kube-rbac-proxy-frr/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.676863 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/reloader/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.770823 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-qpcbj_417cdacf-3299-4488-abd5-ceb51272f3be/frr-k8s-webhook-server/0.log" Feb 26 18:35:42 crc kubenswrapper[4805]: I0226 18:35:42.933640 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-54558c788d-wmhq8_22a4700e-04cb-4a48-9596-e0813f515868/manager/0.log" Feb 26 18:35:43 crc kubenswrapper[4805]: I0226 18:35:43.143478 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-54896b5f59-xjslq_154d74df-d117-44e5-86c4-b4a72182153e/webhook-server/0.log" Feb 26 18:35:43 crc kubenswrapper[4805]: I0226 18:35:43.253259 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-djvlh_0cde9605-24e4-48e2-b0b7-a4bf09039031/kube-rbac-proxy/0.log" Feb 26 18:35:43 crc kubenswrapper[4805]: I0226 18:35:43.998173 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-djvlh_0cde9605-24e4-48e2-b0b7-a4bf09039031/speaker/0.log" Feb 26 18:35:44 crc kubenswrapper[4805]: I0226 18:35:44.351528 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gb6n5_98934f34-7841-49dc-b326-c76aa0c09017/frr/0.log" Feb 26 18:35:47 crc kubenswrapper[4805]: I0226 18:35:47.953712 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:35:47 crc kubenswrapper[4805]: E0226 18:35:47.954708 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:35:58 crc kubenswrapper[4805]: I0226 18:35:58.871836 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/util/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.165547 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/util/0.log" Feb 26 18:35:59 crc 
kubenswrapper[4805]: I0226 18:35:59.224203 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/pull/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.258304 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/pull/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.421280 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/pull/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.421699 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/util/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.458263 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82x2xnq_22ae402a-acf3-450c-b0e8-e3ad05335343/extract/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.560829 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/util/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.789714 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/util/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.798208 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/pull/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.813761 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/pull/0.log" Feb 26 18:35:59 crc kubenswrapper[4805]: I0226 18:35:59.972294 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/util/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.024118 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/extract/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.031955 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651s5h92_3b63836c-d48c-4cba-a661-0c6063f2cbbc/pull/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.150899 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535516-8jgpr"] Feb 26 18:36:00 crc kubenswrapper[4805]: E0226 18:36:00.151405 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e0bb37-53c2-4072-bb79-49dd3a8760c8" containerName="oc" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.151422 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e0bb37-53c2-4072-bb79-49dd3a8760c8" containerName="oc" Feb 26 18:36:00 crc kubenswrapper[4805]: E0226 18:36:00.151437 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="registry-server" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 
18:36:00.151443 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="registry-server" Feb 26 18:36:00 crc kubenswrapper[4805]: E0226 18:36:00.151450 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="extract-utilities" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.151456 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="extract-utilities" Feb 26 18:36:00 crc kubenswrapper[4805]: E0226 18:36:00.151467 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="extract-content" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.151483 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="extract-content" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.151675 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e0bb37-53c2-4072-bb79-49dd3a8760c8" containerName="oc" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.151700 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d76a758-43b7-4c37-a79c-dcb9770a8b09" containerName="registry-server" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.152421 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.154588 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.155095 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.164701 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535516-8jgpr"] Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.165374 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.232974 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/util/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.319793 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rwh\" (UniqueName: \"kubernetes.io/projected/1f813caa-a8ab-49b0-891a-d499bc05a9a5-kube-api-access-88rwh\") pod \"auto-csr-approver-29535516-8jgpr\" (UID: \"1f813caa-a8ab-49b0-891a-d499bc05a9a5\") " pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.416044 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/util/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.421763 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88rwh\" (UniqueName: 
\"kubernetes.io/projected/1f813caa-a8ab-49b0-891a-d499bc05a9a5-kube-api-access-88rwh\") pod \"auto-csr-approver-29535516-8jgpr\" (UID: \"1f813caa-a8ab-49b0-891a-d499bc05a9a5\") " pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.446999 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/pull/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.448742 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/pull/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.451584 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rwh\" (UniqueName: \"kubernetes.io/projected/1f813caa-a8ab-49b0-891a-d499bc05a9a5-kube-api-access-88rwh\") pod \"auto-csr-approver-29535516-8jgpr\" (UID: \"1f813caa-a8ab-49b0-891a-d499bc05a9a5\") " pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.472999 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.639104 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/extract/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.687501 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/pull/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.708451 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0865zbx_c45bc675-11fe-4450-b640-fa2d62126bda/util/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.830348 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/extract-utilities/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.922998 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535516-8jgpr"] Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.925475 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.971171 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/extract-content/0.log" Feb 26 18:36:00 crc kubenswrapper[4805]: I0226 18:36:00.992782 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/extract-utilities/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.029921 4805 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/extract-content/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.145774 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" event={"ID":"1f813caa-a8ab-49b0-891a-d499bc05a9a5","Type":"ContainerStarted","Data":"089be26a3796095a94cc5820cde086bd96673d475479250b455acf25686586d7"} Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.206168 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/extract-utilities/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.298064 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/extract-content/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.429497 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/extract-utilities/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.687818 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/extract-content/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.763274 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/extract-content/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.775165 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/extract-utilities/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.868601 4805 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_certified-operators-z8np4_f0f39e3f-788e-466a-bb81-0278246ad4b6/registry-server/0.log" Feb 26 18:36:01 crc kubenswrapper[4805]: I0226 18:36:01.963681 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/extract-content/0.log" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.036744 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/extract-utilities/0.log" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.156454 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" event={"ID":"1f813caa-a8ab-49b0-891a-d499bc05a9a5","Type":"ContainerStarted","Data":"a261f788842547c329e1a509b15464eb97e814ec93bec5920352a4c14b058a00"} Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.172643 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" podStartSLOduration=1.300642646 podStartE2EDuration="2.172627574s" podCreationTimestamp="2026-02-26 18:36:00 +0000 UTC" firstStartedPulling="2026-02-26 18:36:00.925202035 +0000 UTC m=+4875.486956374" lastFinishedPulling="2026-02-26 18:36:01.797186963 +0000 UTC m=+4876.358941302" observedRunningTime="2026-02-26 18:36:02.167745662 +0000 UTC m=+4876.729500001" watchObservedRunningTime="2026-02-26 18:36:02.172627574 +0000 UTC m=+4876.734381913" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.208613 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/util/0.log" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.683905 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-r5crp_d021e872-9d99-40a3-8b0d-865aa5c8b287/registry-server/0.log" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.948482 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/util/0.log" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.952580 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/pull/0.log" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.953033 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:36:02 crc kubenswrapper[4805]: E0226 18:36:02.953390 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:36:02 crc kubenswrapper[4805]: I0226 18:36:02.956565 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/pull/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.119361 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/util/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.141522 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/pull/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.153129 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-shsd9_f4c1b8e1-fec8-422a-b155-b99fd4a121fc/marketplace-operator/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.161595 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4sxhs9_6684ce9e-9190-4147-961f-4d9b437d17be/extract/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.166911 4805 generic.go:334] "Generic (PLEG): container finished" podID="1f813caa-a8ab-49b0-891a-d499bc05a9a5" containerID="a261f788842547c329e1a509b15464eb97e814ec93bec5920352a4c14b058a00" exitCode=0 Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.167006 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" event={"ID":"1f813caa-a8ab-49b0-891a-d499bc05a9a5","Type":"ContainerDied","Data":"a261f788842547c329e1a509b15464eb97e814ec93bec5920352a4c14b058a00"} Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.368290 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/extract-utilities/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.557660 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/extract-content/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.590220 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/extract-utilities/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: 
I0226 18:36:03.600910 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/extract-content/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.746627 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/extract-utilities/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.775846 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/extract-content/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.775937 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/extract-utilities/0.log" Feb 26 18:36:03 crc kubenswrapper[4805]: I0226 18:36:03.974435 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-zwn5f_cab13165-dd85-4398-996a-f9795912f12e/registry-server/0.log" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.011925 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/extract-content/0.log" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.037560 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/extract-utilities/0.log" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.091895 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/extract-content/0.log" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.704698 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.794234 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/extract-content/0.log" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.810726 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rwh\" (UniqueName: \"kubernetes.io/projected/1f813caa-a8ab-49b0-891a-d499bc05a9a5-kube-api-access-88rwh\") pod \"1f813caa-a8ab-49b0-891a-d499bc05a9a5\" (UID: \"1f813caa-a8ab-49b0-891a-d499bc05a9a5\") " Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.818105 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f813caa-a8ab-49b0-891a-d499bc05a9a5-kube-api-access-88rwh" (OuterVolumeSpecName: "kube-api-access-88rwh") pod "1f813caa-a8ab-49b0-891a-d499bc05a9a5" (UID: "1f813caa-a8ab-49b0-891a-d499bc05a9a5"). InnerVolumeSpecName "kube-api-access-88rwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.829847 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/extract-utilities/0.log" Feb 26 18:36:04 crc kubenswrapper[4805]: I0226 18:36:04.913903 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rwh\" (UniqueName: \"kubernetes.io/projected/1f813caa-a8ab-49b0-891a-d499bc05a9a5-kube-api-access-88rwh\") on node \"crc\" DevicePath \"\"" Feb 26 18:36:05 crc kubenswrapper[4805]: I0226 18:36:05.189392 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" event={"ID":"1f813caa-a8ab-49b0-891a-d499bc05a9a5","Type":"ContainerDied","Data":"089be26a3796095a94cc5820cde086bd96673d475479250b455acf25686586d7"} Feb 26 18:36:05 crc kubenswrapper[4805]: I0226 18:36:05.189432 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089be26a3796095a94cc5820cde086bd96673d475479250b455acf25686586d7" Feb 26 18:36:05 crc kubenswrapper[4805]: I0226 18:36:05.189495 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535516-8jgpr" Feb 26 18:36:05 crc kubenswrapper[4805]: I0226 18:36:05.242499 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535510-lt547"] Feb 26 18:36:05 crc kubenswrapper[4805]: I0226 18:36:05.253353 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535510-lt547"] Feb 26 18:36:05 crc kubenswrapper[4805]: I0226 18:36:05.317426 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rb2p9_db58eaee-5842-4d11-babf-1ededef9c68e/registry-server/0.log" Feb 26 18:36:06 crc kubenswrapper[4805]: I0226 18:36:06.969238 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fd51e0-ba17-4be2-8e62-d1be273a222e" path="/var/lib/kubelet/pods/78fd51e0-ba17-4be2-8e62-d1be273a222e/volumes" Feb 26 18:36:13 crc kubenswrapper[4805]: I0226 18:36:13.953004 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:36:13 crc kubenswrapper[4805]: E0226 18:36:13.954490 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:36:18 crc kubenswrapper[4805]: I0226 18:36:18.693071 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c4cb46844-fsfd4_b3e82ddd-d1e2-4d50-a582-04f07e2a1fb4/prometheus-operator-admission-webhook/0.log" Feb 26 18:36:18 crc kubenswrapper[4805]: I0226 18:36:18.705700 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-htjln_dfc0288a-d269-4568-a6f0-57bd9fa6cfcc/prometheus-operator/0.log" Feb 26 18:36:18 crc kubenswrapper[4805]: I0226 18:36:18.731960 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c4cb46844-nsp7p_174cbe90-075b-4c73-ae20-cc8a47c42d06/prometheus-operator-admission-webhook/0.log" Feb 26 18:36:18 crc kubenswrapper[4805]: I0226 18:36:18.957037 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-t7gbj_df96d4df-f5b6-4b7b-956d-f957313d1914/operator/0.log" Feb 26 18:36:18 crc kubenswrapper[4805]: I0226 18:36:18.966681 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2xtcm_c48a9594-81fc-493b-98b0-fc3ad286abe2/perses-operator/0.log" Feb 26 18:36:26 crc kubenswrapper[4805]: I0226 18:36:26.961717 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:36:26 crc kubenswrapper[4805]: E0226 18:36:26.962667 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:36:33 crc kubenswrapper[4805]: I0226 18:36:33.411770 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dbdbb8494-f8mdl_9d1d4276-53fb-4d5a-814b-c29e097718d0/kube-rbac-proxy/0.log" Feb 26 18:36:33 crc kubenswrapper[4805]: I0226 18:36:33.431435 4805 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-dbdbb8494-f8mdl_9d1d4276-53fb-4d5a-814b-c29e097718d0/manager/0.log" Feb 26 18:36:41 crc kubenswrapper[4805]: I0226 18:36:41.953498 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:36:41 crc kubenswrapper[4805]: E0226 18:36:41.954333 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:36:49 crc kubenswrapper[4805]: I0226 18:36:49.504146 4805 scope.go:117] "RemoveContainer" containerID="f4a8e881a17b95a2c7a6190e7b8649d99646b4b4cce6c07197bb7491211cf453" Feb 26 18:36:53 crc kubenswrapper[4805]: I0226 18:36:53.954435 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:36:53 crc kubenswrapper[4805]: E0226 18:36:53.955107 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:37:04 crc kubenswrapper[4805]: I0226 18:37:04.953125 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:37:04 crc kubenswrapper[4805]: E0226 18:37:04.953642 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:37:18 crc kubenswrapper[4805]: I0226 18:37:18.955091 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:37:18 crc kubenswrapper[4805]: E0226 18:37:18.955928 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:37:29 crc kubenswrapper[4805]: I0226 18:37:29.954501 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:37:29 crc kubenswrapper[4805]: E0226 18:37:29.955892 4805 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2mnb9_openshift-machine-config-operator(25e83477-65d0-41be-8e55-fdacfc5871a8)\"" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" Feb 26 18:37:44 crc kubenswrapper[4805]: I0226 18:37:44.953389 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:37:45 crc kubenswrapper[4805]: I0226 18:37:45.238175 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"94698164bbc748d493aaa4737cfc8a3edd3072f7f08fd64a50f13376435af125"} Feb 26 18:37:49 crc kubenswrapper[4805]: I0226 18:37:49.621782 4805 scope.go:117] "RemoveContainer" containerID="6e9fecc8a373882648e9b316833ccd4fc6599bc5c954578c0fb6e6622bcc96cd" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.157458 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535518-hdjbr"] Feb 26 18:38:00 crc kubenswrapper[4805]: E0226 18:38:00.160458 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f813caa-a8ab-49b0-891a-d499bc05a9a5" containerName="oc" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.160621 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f813caa-a8ab-49b0-891a-d499bc05a9a5" containerName="oc" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.161495 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f813caa-a8ab-49b0-891a-d499bc05a9a5" containerName="oc" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.162741 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.166302 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.166316 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.168063 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.198469 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535518-hdjbr"] Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.327221 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kkv\" (UniqueName: \"kubernetes.io/projected/586995d4-d43f-401b-babf-9779493cb4ba-kube-api-access-q8kkv\") pod \"auto-csr-approver-29535518-hdjbr\" (UID: \"586995d4-d43f-401b-babf-9779493cb4ba\") " pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.428830 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kkv\" (UniqueName: \"kubernetes.io/projected/586995d4-d43f-401b-babf-9779493cb4ba-kube-api-access-q8kkv\") pod \"auto-csr-approver-29535518-hdjbr\" (UID: \"586995d4-d43f-401b-babf-9779493cb4ba\") " pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.448958 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8kkv\" (UniqueName: \"kubernetes.io/projected/586995d4-d43f-401b-babf-9779493cb4ba-kube-api-access-q8kkv\") pod \"auto-csr-approver-29535518-hdjbr\" (UID: \"586995d4-d43f-401b-babf-9779493cb4ba\") " 
pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:00 crc kubenswrapper[4805]: I0226 18:38:00.492278 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:01 crc kubenswrapper[4805]: I0226 18:38:00.997008 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535518-hdjbr"] Feb 26 18:38:01 crc kubenswrapper[4805]: W0226 18:38:01.077332 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod586995d4_d43f_401b_babf_9779493cb4ba.slice/crio-4dd73abc7e3b69e46a9c27a20e48d3acb8c10e2cac90214a1ea59571f360856e WatchSource:0}: Error finding container 4dd73abc7e3b69e46a9c27a20e48d3acb8c10e2cac90214a1ea59571f360856e: Status 404 returned error can't find the container with id 4dd73abc7e3b69e46a9c27a20e48d3acb8c10e2cac90214a1ea59571f360856e Feb 26 18:38:01 crc kubenswrapper[4805]: I0226 18:38:01.387811 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" event={"ID":"586995d4-d43f-401b-babf-9779493cb4ba","Type":"ContainerStarted","Data":"4dd73abc7e3b69e46a9c27a20e48d3acb8c10e2cac90214a1ea59571f360856e"} Feb 26 18:38:03 crc kubenswrapper[4805]: I0226 18:38:03.409281 4805 generic.go:334] "Generic (PLEG): container finished" podID="586995d4-d43f-401b-babf-9779493cb4ba" containerID="b652a26cbeb82b4d53eddf0e9da6cb97d88a2ccc9f40bdff7516cd3ec3e128d3" exitCode=0 Feb 26 18:38:03 crc kubenswrapper[4805]: I0226 18:38:03.409593 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" event={"ID":"586995d4-d43f-401b-babf-9779493cb4ba","Type":"ContainerDied","Data":"b652a26cbeb82b4d53eddf0e9da6cb97d88a2ccc9f40bdff7516cd3ec3e128d3"} Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.024319 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.145093 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8kkv\" (UniqueName: \"kubernetes.io/projected/586995d4-d43f-401b-babf-9779493cb4ba-kube-api-access-q8kkv\") pod \"586995d4-d43f-401b-babf-9779493cb4ba\" (UID: \"586995d4-d43f-401b-babf-9779493cb4ba\") " Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.151692 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586995d4-d43f-401b-babf-9779493cb4ba-kube-api-access-q8kkv" (OuterVolumeSpecName: "kube-api-access-q8kkv") pod "586995d4-d43f-401b-babf-9779493cb4ba" (UID: "586995d4-d43f-401b-babf-9779493cb4ba"). InnerVolumeSpecName "kube-api-access-q8kkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.249934 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8kkv\" (UniqueName: \"kubernetes.io/projected/586995d4-d43f-401b-babf-9779493cb4ba-kube-api-access-q8kkv\") on node \"crc\" DevicePath \"\"" Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.437514 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" event={"ID":"586995d4-d43f-401b-babf-9779493cb4ba","Type":"ContainerDied","Data":"4dd73abc7e3b69e46a9c27a20e48d3acb8c10e2cac90214a1ea59571f360856e"} Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.437560 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd73abc7e3b69e46a9c27a20e48d3acb8c10e2cac90214a1ea59571f360856e" Feb 26 18:38:05 crc kubenswrapper[4805]: I0226 18:38:05.437598 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535518-hdjbr" Feb 26 18:38:06 crc kubenswrapper[4805]: I0226 18:38:06.122722 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535512-lkw4c"] Feb 26 18:38:06 crc kubenswrapper[4805]: I0226 18:38:06.142292 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535512-lkw4c"] Feb 26 18:38:06 crc kubenswrapper[4805]: I0226 18:38:06.977789 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c38d130-9e7f-4edf-bb86-f62f1bfe000b" path="/var/lib/kubelet/pods/5c38d130-9e7f-4edf-bb86-f62f1bfe000b/volumes" Feb 26 18:38:26 crc kubenswrapper[4805]: I0226 18:38:26.340261 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7d9687b795-zqn9b" podUID="01b8f655-2944-4562-89c4-d2bcf9516cde" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 26 18:38:31 crc kubenswrapper[4805]: I0226 18:38:31.716895 4805 generic.go:334] "Generic (PLEG): container finished" podID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerID="08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817" exitCode=0 Feb 26 18:38:31 crc kubenswrapper[4805]: I0226 18:38:31.716983 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" event={"ID":"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4","Type":"ContainerDied","Data":"08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817"} Feb 26 18:38:31 crc kubenswrapper[4805]: I0226 18:38:31.719715 4805 scope.go:117] "RemoveContainer" containerID="08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817" Feb 26 18:38:32 crc kubenswrapper[4805]: I0226 18:38:32.080834 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7cvl2_must-gather-m4mgg_d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4/gather/0.log" Feb 26 18:38:41 crc 
kubenswrapper[4805]: I0226 18:38:41.076082 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7cvl2/must-gather-m4mgg"] Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.076736 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="copy" containerID="cri-o://fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b" gracePeriod=2 Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.086756 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7cvl2/must-gather-m4mgg"] Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.682581 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7cvl2_must-gather-m4mgg_d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4/copy/0.log" Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.683365 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.818502 4805 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7cvl2_must-gather-m4mgg_d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4/copy/0.log" Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.818973 4805 generic.go:334] "Generic (PLEG): container finished" podID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerID="fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b" exitCode=143 Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.819035 4805 scope.go:117] "RemoveContainer" containerID="fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b" Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.819160 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7cvl2/must-gather-m4mgg" Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.838232 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-must-gather-output\") pod \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.838365 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzt6\" (UniqueName: \"kubernetes.io/projected/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-kube-api-access-7kzt6\") pod \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\" (UID: \"d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4\") " Feb 26 18:38:41 crc kubenswrapper[4805]: I0226 18:38:41.848187 4805 scope.go:117] "RemoveContainer" containerID="08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.065521 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" (UID: "d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.147355 4805 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.178767 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-kube-api-access-7kzt6" (OuterVolumeSpecName: "kube-api-access-7kzt6") pod "d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" (UID: "d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4"). InnerVolumeSpecName "kube-api-access-7kzt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.239227 4805 scope.go:117] "RemoveContainer" containerID="fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b" Feb 26 18:38:42 crc kubenswrapper[4805]: E0226 18:38:42.242671 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b\": container with ID starting with fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b not found: ID does not exist" containerID="fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.242757 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b"} err="failed to get container status \"fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b\": rpc error: code = NotFound desc = could not find container \"fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b\": container with ID starting with 
fe7875ff4b71c2dfef59f52ff9d9d4994b9ed3fa10917df12b79e1484fdaec9b not found: ID does not exist" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.242809 4805 scope.go:117] "RemoveContainer" containerID="08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817" Feb 26 18:38:42 crc kubenswrapper[4805]: E0226 18:38:42.243236 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817\": container with ID starting with 08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817 not found: ID does not exist" containerID="08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.243285 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817"} err="failed to get container status \"08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817\": rpc error: code = NotFound desc = could not find container \"08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817\": container with ID starting with 08146ac29046346b848998f16192d184fced07ca1bf020240e2d0ca1acaa3817 not found: ID does not exist" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.249741 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzt6\" (UniqueName: \"kubernetes.io/projected/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4-kube-api-access-7kzt6\") on node \"crc\" DevicePath \"\"" Feb 26 18:38:42 crc kubenswrapper[4805]: I0226 18:38:42.968611 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" path="/var/lib/kubelet/pods/d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4/volumes" Feb 26 18:38:49 crc kubenswrapper[4805]: I0226 18:38:49.672657 4805 scope.go:117] "RemoveContainer" 
containerID="16ab2789883d381c1d32c9c2adc5ad689ae87e46691707a72370d718f0efa8ad" Feb 26 18:38:49 crc kubenswrapper[4805]: I0226 18:38:49.736207 4805 scope.go:117] "RemoveContainer" containerID="c3f94d9a7ee84d714afd682e47cc63bf12aa8e9572def3b9a444a3da4bcb3619" Feb 26 18:39:26 crc kubenswrapper[4805]: I0226 18:39:26.338866 4805 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7d9687b795-zqn9b" podUID="01b8f655-2944-4562-89c4-d2bcf9516cde" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.289886 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7g27g"] Feb 26 18:39:58 crc kubenswrapper[4805]: E0226 18:39:58.291193 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="copy" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.291210 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="copy" Feb 26 18:39:58 crc kubenswrapper[4805]: E0226 18:39:58.291268 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="gather" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.291277 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="gather" Feb 26 18:39:58 crc kubenswrapper[4805]: E0226 18:39:58.291294 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586995d4-d43f-401b-babf-9779493cb4ba" containerName="oc" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.291303 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="586995d4-d43f-401b-babf-9779493cb4ba" containerName="oc" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.291543 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="586995d4-d43f-401b-babf-9779493cb4ba" containerName="oc" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.291557 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="gather" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.291568 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="d363f2b6-6c6f-4c6a-acb9-f1fd00a306a4" containerName="copy" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.293574 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.307829 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g27g"] Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.461778 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hj4d\" (UniqueName: \"kubernetes.io/projected/75a6acbb-becd-47c8-8904-beba5962488b-kube-api-access-8hj4d\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.462130 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-utilities\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.462394 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-catalog-content\") pod \"redhat-marketplace-7g27g\" (UID: 
\"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.564363 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-utilities\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.564450 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-catalog-content\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.564511 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hj4d\" (UniqueName: \"kubernetes.io/projected/75a6acbb-becd-47c8-8904-beba5962488b-kube-api-access-8hj4d\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.564953 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-utilities\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.564971 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-catalog-content\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " 
pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.586858 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hj4d\" (UniqueName: \"kubernetes.io/projected/75a6acbb-becd-47c8-8904-beba5962488b-kube-api-access-8hj4d\") pod \"redhat-marketplace-7g27g\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:58 crc kubenswrapper[4805]: I0226 18:39:58.615647 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:39:59 crc kubenswrapper[4805]: I0226 18:39:59.159769 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g27g"] Feb 26 18:39:59 crc kubenswrapper[4805]: I0226 18:39:59.744567 4805 generic.go:334] "Generic (PLEG): container finished" podID="75a6acbb-becd-47c8-8904-beba5962488b" containerID="89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531" exitCode=0 Feb 26 18:39:59 crc kubenswrapper[4805]: I0226 18:39:59.744626 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g27g" event={"ID":"75a6acbb-becd-47c8-8904-beba5962488b","Type":"ContainerDied","Data":"89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531"} Feb 26 18:39:59 crc kubenswrapper[4805]: I0226 18:39:59.744942 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g27g" event={"ID":"75a6acbb-becd-47c8-8904-beba5962488b","Type":"ContainerStarted","Data":"441d0d45b3a0b2f1d3624127c239ee734e0caed2da98f93de74c3890de73a182"} Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.154616 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535520-6vjkb"] Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.156350 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.158979 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.160094 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.160188 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.179066 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535520-6vjkb"] Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.301426 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncr2\" (UniqueName: \"kubernetes.io/projected/a18ec65f-7fd5-44c3-904a-c407c3ce5136-kube-api-access-xncr2\") pod \"auto-csr-approver-29535520-6vjkb\" (UID: \"a18ec65f-7fd5-44c3-904a-c407c3ce5136\") " pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.403601 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncr2\" (UniqueName: \"kubernetes.io/projected/a18ec65f-7fd5-44c3-904a-c407c3ce5136-kube-api-access-xncr2\") pod \"auto-csr-approver-29535520-6vjkb\" (UID: \"a18ec65f-7fd5-44c3-904a-c407c3ce5136\") " pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.424569 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncr2\" (UniqueName: \"kubernetes.io/projected/a18ec65f-7fd5-44c3-904a-c407c3ce5136-kube-api-access-xncr2\") pod \"auto-csr-approver-29535520-6vjkb\" (UID: \"a18ec65f-7fd5-44c3-904a-c407c3ce5136\") " 
pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:00 crc kubenswrapper[4805]: I0226 18:40:00.517792 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:01 crc kubenswrapper[4805]: W0226 18:40:01.011456 4805 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda18ec65f_7fd5_44c3_904a_c407c3ce5136.slice/crio-276777865bda8c8900bfff070f354749c2f4eff6b2bc2ad5839db6187bec6b6e WatchSource:0}: Error finding container 276777865bda8c8900bfff070f354749c2f4eff6b2bc2ad5839db6187bec6b6e: Status 404 returned error can't find the container with id 276777865bda8c8900bfff070f354749c2f4eff6b2bc2ad5839db6187bec6b6e Feb 26 18:40:01 crc kubenswrapper[4805]: I0226 18:40:01.011856 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535520-6vjkb"] Feb 26 18:40:01 crc kubenswrapper[4805]: I0226 18:40:01.777751 4805 generic.go:334] "Generic (PLEG): container finished" podID="75a6acbb-becd-47c8-8904-beba5962488b" containerID="a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3" exitCode=0 Feb 26 18:40:01 crc kubenswrapper[4805]: I0226 18:40:01.777815 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g27g" event={"ID":"75a6acbb-becd-47c8-8904-beba5962488b","Type":"ContainerDied","Data":"a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3"} Feb 26 18:40:01 crc kubenswrapper[4805]: I0226 18:40:01.780662 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535520-6vjkb" event={"ID":"a18ec65f-7fd5-44c3-904a-c407c3ce5136","Type":"ContainerStarted","Data":"276777865bda8c8900bfff070f354749c2f4eff6b2bc2ad5839db6187bec6b6e"} Feb 26 18:40:02 crc kubenswrapper[4805]: I0226 18:40:02.818603 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7g27g" event={"ID":"75a6acbb-becd-47c8-8904-beba5962488b","Type":"ContainerStarted","Data":"74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0"} Feb 26 18:40:02 crc kubenswrapper[4805]: I0226 18:40:02.851544 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7g27g" podStartSLOduration=2.206273097 podStartE2EDuration="4.851523954s" podCreationTimestamp="2026-02-26 18:39:58 +0000 UTC" firstStartedPulling="2026-02-26 18:39:59.749078176 +0000 UTC m=+5114.310832525" lastFinishedPulling="2026-02-26 18:40:02.394329043 +0000 UTC m=+5116.956083382" observedRunningTime="2026-02-26 18:40:02.840453556 +0000 UTC m=+5117.402207895" watchObservedRunningTime="2026-02-26 18:40:02.851523954 +0000 UTC m=+5117.413278293" Feb 26 18:40:02 crc kubenswrapper[4805]: I0226 18:40:02.978418 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:40:02 crc kubenswrapper[4805]: I0226 18:40:02.978740 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:40:03 crc kubenswrapper[4805]: I0226 18:40:03.838158 4805 generic.go:334] "Generic (PLEG): container finished" podID="a18ec65f-7fd5-44c3-904a-c407c3ce5136" containerID="7f69163e626b78f9fcd593e53e9a3c7f065394b2a9dac34c9b5fa87693c9e427" exitCode=0 Feb 26 18:40:03 crc kubenswrapper[4805]: I0226 18:40:03.839583 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29535520-6vjkb" event={"ID":"a18ec65f-7fd5-44c3-904a-c407c3ce5136","Type":"ContainerDied","Data":"7f69163e626b78f9fcd593e53e9a3c7f065394b2a9dac34c9b5fa87693c9e427"} Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.430167 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.629220 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncr2\" (UniqueName: \"kubernetes.io/projected/a18ec65f-7fd5-44c3-904a-c407c3ce5136-kube-api-access-xncr2\") pod \"a18ec65f-7fd5-44c3-904a-c407c3ce5136\" (UID: \"a18ec65f-7fd5-44c3-904a-c407c3ce5136\") " Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.655085 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18ec65f-7fd5-44c3-904a-c407c3ce5136-kube-api-access-xncr2" (OuterVolumeSpecName: "kube-api-access-xncr2") pod "a18ec65f-7fd5-44c3-904a-c407c3ce5136" (UID: "a18ec65f-7fd5-44c3-904a-c407c3ce5136"). InnerVolumeSpecName "kube-api-access-xncr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.732055 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xncr2\" (UniqueName: \"kubernetes.io/projected/a18ec65f-7fd5-44c3-904a-c407c3ce5136-kube-api-access-xncr2\") on node \"crc\" DevicePath \"\"" Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.859568 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535520-6vjkb" event={"ID":"a18ec65f-7fd5-44c3-904a-c407c3ce5136","Type":"ContainerDied","Data":"276777865bda8c8900bfff070f354749c2f4eff6b2bc2ad5839db6187bec6b6e"} Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.859613 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276777865bda8c8900bfff070f354749c2f4eff6b2bc2ad5839db6187bec6b6e" Feb 26 18:40:05 crc kubenswrapper[4805]: I0226 18:40:05.859654 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535520-6vjkb" Feb 26 18:40:06 crc kubenswrapper[4805]: I0226 18:40:06.544013 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535514-tf9js"] Feb 26 18:40:06 crc kubenswrapper[4805]: I0226 18:40:06.559516 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535514-tf9js"] Feb 26 18:40:06 crc kubenswrapper[4805]: I0226 18:40:06.972890 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66e0bb37-53c2-4072-bb79-49dd3a8760c8" path="/var/lib/kubelet/pods/66e0bb37-53c2-4072-bb79-49dd3a8760c8/volumes" Feb 26 18:40:08 crc kubenswrapper[4805]: I0226 18:40:08.615971 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:40:08 crc kubenswrapper[4805]: I0226 18:40:08.616354 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:40:08 crc kubenswrapper[4805]: I0226 18:40:08.673928 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:40:09 crc kubenswrapper[4805]: I0226 18:40:09.023843 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:40:09 crc kubenswrapper[4805]: I0226 18:40:09.086384 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g27g"] Feb 26 18:40:10 crc kubenswrapper[4805]: I0226 18:40:10.995103 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7g27g" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="registry-server" containerID="cri-o://74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0" gracePeriod=2 Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.727849 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.876401 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hj4d\" (UniqueName: \"kubernetes.io/projected/75a6acbb-becd-47c8-8904-beba5962488b-kube-api-access-8hj4d\") pod \"75a6acbb-becd-47c8-8904-beba5962488b\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.877311 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-catalog-content\") pod \"75a6acbb-becd-47c8-8904-beba5962488b\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.877374 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-utilities\") pod \"75a6acbb-becd-47c8-8904-beba5962488b\" (UID: \"75a6acbb-becd-47c8-8904-beba5962488b\") " Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.878174 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-utilities" (OuterVolumeSpecName: "utilities") pod "75a6acbb-becd-47c8-8904-beba5962488b" (UID: "75a6acbb-becd-47c8-8904-beba5962488b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.878384 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.883362 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a6acbb-becd-47c8-8904-beba5962488b-kube-api-access-8hj4d" (OuterVolumeSpecName: "kube-api-access-8hj4d") pod "75a6acbb-becd-47c8-8904-beba5962488b" (UID: "75a6acbb-becd-47c8-8904-beba5962488b"). InnerVolumeSpecName "kube-api-access-8hj4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.918955 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75a6acbb-becd-47c8-8904-beba5962488b" (UID: "75a6acbb-becd-47c8-8904-beba5962488b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.980925 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hj4d\" (UniqueName: \"kubernetes.io/projected/75a6acbb-becd-47c8-8904-beba5962488b-kube-api-access-8hj4d\") on node \"crc\" DevicePath \"\"" Feb 26 18:40:11 crc kubenswrapper[4805]: I0226 18:40:11.981237 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75a6acbb-becd-47c8-8904-beba5962488b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.007117 4805 generic.go:334] "Generic (PLEG): container finished" podID="75a6acbb-becd-47c8-8904-beba5962488b" containerID="74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0" exitCode=0 Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.007170 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g27g" event={"ID":"75a6acbb-becd-47c8-8904-beba5962488b","Type":"ContainerDied","Data":"74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0"} Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.007205 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g27g" event={"ID":"75a6acbb-becd-47c8-8904-beba5962488b","Type":"ContainerDied","Data":"441d0d45b3a0b2f1d3624127c239ee734e0caed2da98f93de74c3890de73a182"} Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.007226 4805 scope.go:117] "RemoveContainer" containerID="74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.008166 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g27g" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.034910 4805 scope.go:117] "RemoveContainer" containerID="a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.049109 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g27g"] Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.062725 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g27g"] Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.063641 4805 scope.go:117] "RemoveContainer" containerID="89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.121465 4805 scope.go:117] "RemoveContainer" containerID="74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0" Feb 26 18:40:12 crc kubenswrapper[4805]: E0226 18:40:12.121940 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0\": container with ID starting with 74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0 not found: ID does not exist" containerID="74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.121989 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0"} err="failed to get container status \"74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0\": rpc error: code = NotFound desc = could not find container \"74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0\": container with ID starting with 74a79f46a7e582918a76085de4d7953b2872d2dd6b87703d82d433499f5359d0 not found: 
ID does not exist" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.122032 4805 scope.go:117] "RemoveContainer" containerID="a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3" Feb 26 18:40:12 crc kubenswrapper[4805]: E0226 18:40:12.122327 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3\": container with ID starting with a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3 not found: ID does not exist" containerID="a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.122371 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3"} err="failed to get container status \"a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3\": rpc error: code = NotFound desc = could not find container \"a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3\": container with ID starting with a119127ba5496795f8feb44905a0a02812e0ab218f04c640845854f8791138a3 not found: ID does not exist" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.122394 4805 scope.go:117] "RemoveContainer" containerID="89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531" Feb 26 18:40:12 crc kubenswrapper[4805]: E0226 18:40:12.122612 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531\": container with ID starting with 89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531 not found: ID does not exist" containerID="89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.122636 4805 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531"} err="failed to get container status \"89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531\": rpc error: code = NotFound desc = could not find container \"89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531\": container with ID starting with 89698873a04f98154c04854074c7345a5187b3830bb3ef2b8bc9cf5235074531 not found: ID does not exist" Feb 26 18:40:12 crc kubenswrapper[4805]: I0226 18:40:12.969066 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a6acbb-becd-47c8-8904-beba5962488b" path="/var/lib/kubelet/pods/75a6acbb-becd-47c8-8904-beba5962488b/volumes" Feb 26 18:40:32 crc kubenswrapper[4805]: I0226 18:40:32.978116 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:40:32 crc kubenswrapper[4805]: I0226 18:40:32.978730 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.167104 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-282ct"] Feb 26 18:40:38 crc kubenswrapper[4805]: E0226 18:40:38.168005 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18ec65f-7fd5-44c3-904a-c407c3ce5136" containerName="oc" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.168057 4805 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a18ec65f-7fd5-44c3-904a-c407c3ce5136" containerName="oc" Feb 26 18:40:38 crc kubenswrapper[4805]: E0226 18:40:38.168070 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="registry-server" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.168076 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="registry-server" Feb 26 18:40:38 crc kubenswrapper[4805]: E0226 18:40:38.168119 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="extract-utilities" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.168128 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="extract-utilities" Feb 26 18:40:38 crc kubenswrapper[4805]: E0226 18:40:38.168140 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="extract-content" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.168147 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="extract-content" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.168365 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a6acbb-becd-47c8-8904-beba5962488b" containerName="registry-server" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.168381 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18ec65f-7fd5-44c3-904a-c407c3ce5136" containerName="oc" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.170212 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.192115 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-282ct"] Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.302447 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-utilities\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.302525 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-catalog-content\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.302720 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtkhv\" (UniqueName: \"kubernetes.io/projected/bdfeb3b5-5095-4388-b918-1f807ef01392-kube-api-access-vtkhv\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.404574 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-utilities\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.404648 4805 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-catalog-content\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.404698 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtkhv\" (UniqueName: \"kubernetes.io/projected/bdfeb3b5-5095-4388-b918-1f807ef01392-kube-api-access-vtkhv\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.405190 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-utilities\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.405244 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-catalog-content\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.423570 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtkhv\" (UniqueName: \"kubernetes.io/projected/bdfeb3b5-5095-4388-b918-1f807ef01392-kube-api-access-vtkhv\") pod \"community-operators-282ct\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:38 crc kubenswrapper[4805]: I0226 18:40:38.489972 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:39 crc kubenswrapper[4805]: I0226 18:40:39.047612 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-282ct"] Feb 26 18:40:39 crc kubenswrapper[4805]: I0226 18:40:39.305529 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerID="1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f" exitCode=0 Feb 26 18:40:39 crc kubenswrapper[4805]: I0226 18:40:39.305668 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerDied","Data":"1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f"} Feb 26 18:40:39 crc kubenswrapper[4805]: I0226 18:40:39.305864 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerStarted","Data":"e12efa5e7f5969e96c234c73296cd57103a967f13285aeca712ab38fa61df413"} Feb 26 18:40:42 crc kubenswrapper[4805]: I0226 18:40:42.342798 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerStarted","Data":"2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d"} Feb 26 18:40:44 crc kubenswrapper[4805]: I0226 18:40:44.365086 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerID="2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d" exitCode=0 Feb 26 18:40:44 crc kubenswrapper[4805]: I0226 18:40:44.365139 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" 
event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerDied","Data":"2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d"} Feb 26 18:40:46 crc kubenswrapper[4805]: I0226 18:40:46.397719 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerStarted","Data":"e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488"} Feb 26 18:40:46 crc kubenswrapper[4805]: I0226 18:40:46.431863 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-282ct" podStartSLOduration=4.005950807 podStartE2EDuration="8.431841508s" podCreationTimestamp="2026-02-26 18:40:38 +0000 UTC" firstStartedPulling="2026-02-26 18:40:40.319893345 +0000 UTC m=+5154.881647704" lastFinishedPulling="2026-02-26 18:40:44.745784066 +0000 UTC m=+5159.307538405" observedRunningTime="2026-02-26 18:40:46.424985366 +0000 UTC m=+5160.986739725" watchObservedRunningTime="2026-02-26 18:40:46.431841508 +0000 UTC m=+5160.993595847" Feb 26 18:40:48 crc kubenswrapper[4805]: I0226 18:40:48.491006 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:48 crc kubenswrapper[4805]: I0226 18:40:48.491689 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:48 crc kubenswrapper[4805]: I0226 18:40:48.553251 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:49 crc kubenswrapper[4805]: I0226 18:40:49.849233 4805 scope.go:117] "RemoveContainer" containerID="725d98ccbae346d4c6f67194f1ad982352ca399ec4758ae1efb13ad8899c6502" Feb 26 18:40:58 crc kubenswrapper[4805]: I0226 18:40:58.557357 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-282ct" Feb 26 18:40:58 crc kubenswrapper[4805]: I0226 18:40:58.616458 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-282ct"] Feb 26 18:40:59 crc kubenswrapper[4805]: I0226 18:40:59.533423 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-282ct" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="registry-server" containerID="cri-o://e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488" gracePeriod=2 Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.175148 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.300799 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-catalog-content\") pod \"bdfeb3b5-5095-4388-b918-1f807ef01392\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.301285 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtkhv\" (UniqueName: \"kubernetes.io/projected/bdfeb3b5-5095-4388-b918-1f807ef01392-kube-api-access-vtkhv\") pod \"bdfeb3b5-5095-4388-b918-1f807ef01392\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.301359 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-utilities\") pod \"bdfeb3b5-5095-4388-b918-1f807ef01392\" (UID: \"bdfeb3b5-5095-4388-b918-1f807ef01392\") " Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.302091 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-utilities" (OuterVolumeSpecName: "utilities") pod "bdfeb3b5-5095-4388-b918-1f807ef01392" (UID: "bdfeb3b5-5095-4388-b918-1f807ef01392"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.312445 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfeb3b5-5095-4388-b918-1f807ef01392-kube-api-access-vtkhv" (OuterVolumeSpecName: "kube-api-access-vtkhv") pod "bdfeb3b5-5095-4388-b918-1f807ef01392" (UID: "bdfeb3b5-5095-4388-b918-1f807ef01392"). InnerVolumeSpecName "kube-api-access-vtkhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.347978 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdfeb3b5-5095-4388-b918-1f807ef01392" (UID: "bdfeb3b5-5095-4388-b918-1f807ef01392"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.403295 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.403323 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtkhv\" (UniqueName: \"kubernetes.io/projected/bdfeb3b5-5095-4388-b918-1f807ef01392-kube-api-access-vtkhv\") on node \"crc\" DevicePath \"\"" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.403334 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdfeb3b5-5095-4388-b918-1f807ef01392-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.547480 4805 generic.go:334] "Generic (PLEG): container finished" podID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerID="e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488" exitCode=0 Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.547561 4805 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-282ct" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.547546 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerDied","Data":"e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488"} Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.547703 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-282ct" event={"ID":"bdfeb3b5-5095-4388-b918-1f807ef01392","Type":"ContainerDied","Data":"e12efa5e7f5969e96c234c73296cd57103a967f13285aeca712ab38fa61df413"} Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.547724 4805 scope.go:117] "RemoveContainer" containerID="e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.573720 4805 scope.go:117] "RemoveContainer" containerID="2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.586402 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-282ct"] Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.592516 4805 scope.go:117] "RemoveContainer" containerID="1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.596407 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-282ct"] Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.641582 4805 scope.go:117] "RemoveContainer" containerID="e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488" Feb 26 18:41:00 crc kubenswrapper[4805]: E0226 18:41:00.642285 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488\": container with ID starting with e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488 not found: ID does not exist" containerID="e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.642327 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488"} err="failed to get container status \"e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488\": rpc error: code = NotFound desc = could not find container \"e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488\": container with ID starting with e6b78471225ee7a24442ed882dff1d45d95e294be02a893ce99f8bf2b4e58488 not found: ID does not exist" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.642358 4805 scope.go:117] "RemoveContainer" containerID="2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d" Feb 26 18:41:00 crc kubenswrapper[4805]: E0226 18:41:00.643012 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d\": container with ID starting with 2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d not found: ID does not exist" containerID="2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.643077 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d"} err="failed to get container status \"2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d\": rpc error: code = NotFound desc = could not find container \"2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d\": container with ID 
starting with 2ee3ec83250db8d371a07c01553e0ce902563ac8e303207afb357911770af74d not found: ID does not exist" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.643104 4805 scope.go:117] "RemoveContainer" containerID="1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f" Feb 26 18:41:00 crc kubenswrapper[4805]: E0226 18:41:00.643415 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f\": container with ID starting with 1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f not found: ID does not exist" containerID="1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.643479 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f"} err="failed to get container status \"1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f\": rpc error: code = NotFound desc = could not find container \"1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f\": container with ID starting with 1f95a2bb17f4c0892d544a140e869b5fc76f518a24223da3e3be664bf36c2b1f not found: ID does not exist" Feb 26 18:41:00 crc kubenswrapper[4805]: I0226 18:41:00.971506 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" path="/var/lib/kubelet/pods/bdfeb3b5-5095-4388-b918-1f807ef01392/volumes" Feb 26 18:41:02 crc kubenswrapper[4805]: I0226 18:41:02.977728 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:41:02 crc kubenswrapper[4805]: I0226 
18:41:02.977996 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:41:02 crc kubenswrapper[4805]: I0226 18:41:02.978043 4805 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" Feb 26 18:41:02 crc kubenswrapper[4805]: I0226 18:41:02.978737 4805 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94698164bbc748d493aaa4737cfc8a3edd3072f7f08fd64a50f13376435af125"} pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 26 18:41:02 crc kubenswrapper[4805]: I0226 18:41:02.978785 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" containerID="cri-o://94698164bbc748d493aaa4737cfc8a3edd3072f7f08fd64a50f13376435af125" gracePeriod=600 Feb 26 18:41:03 crc kubenswrapper[4805]: I0226 18:41:03.584300 4805 generic.go:334] "Generic (PLEG): container finished" podID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerID="94698164bbc748d493aaa4737cfc8a3edd3072f7f08fd64a50f13376435af125" exitCode=0 Feb 26 18:41:03 crc kubenswrapper[4805]: I0226 18:41:03.584441 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerDied","Data":"94698164bbc748d493aaa4737cfc8a3edd3072f7f08fd64a50f13376435af125"} Feb 26 18:41:03 crc 
kubenswrapper[4805]: I0226 18:41:03.584826 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" event={"ID":"25e83477-65d0-41be-8e55-fdacfc5871a8","Type":"ContainerStarted","Data":"9484b03452fdc75ce3afa30897a3db2fb34cf0a31f205c35b2c58d85d36c8e69"} Feb 26 18:41:03 crc kubenswrapper[4805]: I0226 18:41:03.584850 4805 scope.go:117] "RemoveContainer" containerID="b7c30223c5b76c8ad9d42938cacce991adefae21cfd696e2668eb62862ed8166" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.000387 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vsq9v"] Feb 26 18:41:28 crc kubenswrapper[4805]: E0226 18:41:28.001474 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="registry-server" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.001492 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="registry-server" Feb 26 18:41:28 crc kubenswrapper[4805]: E0226 18:41:28.001509 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="extract-content" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.001518 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="extract-content" Feb 26 18:41:28 crc kubenswrapper[4805]: E0226 18:41:28.001561 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="extract-utilities" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.001569 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="extract-utilities" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.001829 4805 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bdfeb3b5-5095-4388-b918-1f807ef01392" containerName="registry-server" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.003829 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.031983 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsq9v"] Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.063241 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-utilities\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.063588 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-catalog-content\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.063649 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9hz\" (UniqueName: \"kubernetes.io/projected/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-kube-api-access-ck9hz\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.165520 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-utilities\") pod \"certified-operators-vsq9v\" (UID: 
\"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.165608 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-catalog-content\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.165655 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9hz\" (UniqueName: \"kubernetes.io/projected/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-kube-api-access-ck9hz\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.166671 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-utilities\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.166715 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-catalog-content\") pod \"certified-operators-vsq9v\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.184915 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck9hz\" (UniqueName: \"kubernetes.io/projected/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-kube-api-access-ck9hz\") pod \"certified-operators-vsq9v\" (UID: 
\"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.347551 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.875773 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vsq9v"] Feb 26 18:41:28 crc kubenswrapper[4805]: I0226 18:41:28.900715 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerStarted","Data":"fbe2897ca2b2912a90ad5be0bfea778a303b92c4ed9e94a0f84353d83b7e4e47"} Feb 26 18:41:29 crc kubenswrapper[4805]: I0226 18:41:29.911723 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerID="4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353" exitCode=0 Feb 26 18:41:29 crc kubenswrapper[4805]: I0226 18:41:29.911781 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerDied","Data":"4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353"} Feb 26 18:41:29 crc kubenswrapper[4805]: I0226 18:41:29.915171 4805 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 26 18:41:30 crc kubenswrapper[4805]: I0226 18:41:30.930149 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerStarted","Data":"7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c"} Feb 26 18:41:32 crc kubenswrapper[4805]: I0226 18:41:32.959811 4805 generic.go:334] "Generic (PLEG): container finished" 
podID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerID="7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c" exitCode=0 Feb 26 18:41:32 crc kubenswrapper[4805]: I0226 18:41:32.982082 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerDied","Data":"7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c"} Feb 26 18:41:33 crc kubenswrapper[4805]: I0226 18:41:33.975933 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerStarted","Data":"f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c"} Feb 26 18:41:34 crc kubenswrapper[4805]: I0226 18:41:34.013684 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vsq9v" podStartSLOduration=3.427168148 podStartE2EDuration="7.013663268s" podCreationTimestamp="2026-02-26 18:41:27 +0000 UTC" firstStartedPulling="2026-02-26 18:41:29.914491597 +0000 UTC m=+5204.476245986" lastFinishedPulling="2026-02-26 18:41:33.500986767 +0000 UTC m=+5208.062741106" observedRunningTime="2026-02-26 18:41:34.002417156 +0000 UTC m=+5208.564171505" watchObservedRunningTime="2026-02-26 18:41:34.013663268 +0000 UTC m=+5208.575417617" Feb 26 18:41:38 crc kubenswrapper[4805]: I0226 18:41:38.348543 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:38 crc kubenswrapper[4805]: I0226 18:41:38.350411 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:38 crc kubenswrapper[4805]: I0226 18:41:38.409537 4805 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 
18:41:39 crc kubenswrapper[4805]: I0226 18:41:39.114074 4805 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:39 crc kubenswrapper[4805]: I0226 18:41:39.182083 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vsq9v"] Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.065676 4805 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vsq9v" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="registry-server" containerID="cri-o://f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c" gracePeriod=2 Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.746985 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.867617 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-catalog-content\") pod \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.867714 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-utilities\") pod \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.867798 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9hz\" (UniqueName: \"kubernetes.io/projected/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-kube-api-access-ck9hz\") pod \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\" (UID: \"5fac3bc0-4c91-4592-ad63-a5ac900bbdda\") " Feb 
26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.868979 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-utilities" (OuterVolumeSpecName: "utilities") pod "5fac3bc0-4c91-4592-ad63-a5ac900bbdda" (UID: "5fac3bc0-4c91-4592-ad63-a5ac900bbdda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.888208 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-kube-api-access-ck9hz" (OuterVolumeSpecName: "kube-api-access-ck9hz") pod "5fac3bc0-4c91-4592-ad63-a5ac900bbdda" (UID: "5fac3bc0-4c91-4592-ad63-a5ac900bbdda"). InnerVolumeSpecName "kube-api-access-ck9hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.945962 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fac3bc0-4c91-4592-ad63-a5ac900bbdda" (UID: "5fac3bc0-4c91-4592-ad63-a5ac900bbdda"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.970582 4805 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.970631 4805 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-utilities\") on node \"crc\" DevicePath \"\"" Feb 26 18:41:41 crc kubenswrapper[4805]: I0226 18:41:41.970645 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck9hz\" (UniqueName: \"kubernetes.io/projected/5fac3bc0-4c91-4592-ad63-a5ac900bbdda-kube-api-access-ck9hz\") on node \"crc\" DevicePath \"\"" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.319202 4805 generic.go:334] "Generic (PLEG): container finished" podID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerID="f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c" exitCode=0 Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.319274 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerDied","Data":"f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c"} Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.319316 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vsq9v" event={"ID":"5fac3bc0-4c91-4592-ad63-a5ac900bbdda","Type":"ContainerDied","Data":"fbe2897ca2b2912a90ad5be0bfea778a303b92c4ed9e94a0f84353d83b7e4e47"} Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.319347 4805 scope.go:117] "RemoveContainer" containerID="f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 
18:41:42.319625 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vsq9v" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.368185 4805 scope.go:117] "RemoveContainer" containerID="7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.410480 4805 scope.go:117] "RemoveContainer" containerID="4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.410693 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vsq9v"] Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.425545 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vsq9v"] Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.463826 4805 scope.go:117] "RemoveContainer" containerID="f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c" Feb 26 18:41:42 crc kubenswrapper[4805]: E0226 18:41:42.466517 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c\": container with ID starting with f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c not found: ID does not exist" containerID="f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.466559 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c"} err="failed to get container status \"f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c\": rpc error: code = NotFound desc = could not find container \"f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c\": container with ID starting with 
f733a9221bb141ceb170afeec5db7612f8f936e1b42d227713622eeef268bc7c not found: ID does not exist" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.466585 4805 scope.go:117] "RemoveContainer" containerID="7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c" Feb 26 18:41:42 crc kubenswrapper[4805]: E0226 18:41:42.466954 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c\": container with ID starting with 7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c not found: ID does not exist" containerID="7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.466977 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c"} err="failed to get container status \"7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c\": rpc error: code = NotFound desc = could not find container \"7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c\": container with ID starting with 7987779ecd90d80a881bca36c356caaf328fbd1dd51454dd6ab397e817201a4c not found: ID does not exist" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.466994 4805 scope.go:117] "RemoveContainer" containerID="4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353" Feb 26 18:41:42 crc kubenswrapper[4805]: E0226 18:41:42.467360 4805 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353\": container with ID starting with 4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353 not found: ID does not exist" containerID="4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353" Feb 26 18:41:42 crc 
kubenswrapper[4805]: I0226 18:41:42.467397 4805 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353"} err="failed to get container status \"4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353\": rpc error: code = NotFound desc = could not find container \"4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353\": container with ID starting with 4bd67d2d1dc1747480cdd72cbcce2d7f86e476154a6e4161dff9b8d360e90353 not found: ID does not exist" Feb 26 18:41:42 crc kubenswrapper[4805]: I0226 18:41:42.963423 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" path="/var/lib/kubelet/pods/5fac3bc0-4c91-4592-ad63-a5ac900bbdda/volumes" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.156323 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535522-8wbk9"] Feb 26 18:42:00 crc kubenswrapper[4805]: E0226 18:42:00.157197 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="extract-utilities" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.157209 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="extract-utilities" Feb 26 18:42:00 crc kubenswrapper[4805]: E0226 18:42:00.157224 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="extract-content" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.157230 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="extract-content" Feb 26 18:42:00 crc kubenswrapper[4805]: E0226 18:42:00.157252 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="registry-server" 
Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.157258 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="registry-server" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.157446 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fac3bc0-4c91-4592-ad63-a5ac900bbdda" containerName="registry-server" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.158183 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.160494 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.160574 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.166328 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.173138 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535522-8wbk9"] Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.266980 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ch2\" (UniqueName: \"kubernetes.io/projected/9834af61-ac69-4329-923f-54f403290c0c-kube-api-access-z8ch2\") pod \"auto-csr-approver-29535522-8wbk9\" (UID: \"9834af61-ac69-4329-923f-54f403290c0c\") " pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.369676 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ch2\" (UniqueName: \"kubernetes.io/projected/9834af61-ac69-4329-923f-54f403290c0c-kube-api-access-z8ch2\") 
pod \"auto-csr-approver-29535522-8wbk9\" (UID: \"9834af61-ac69-4329-923f-54f403290c0c\") " pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.389268 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ch2\" (UniqueName: \"kubernetes.io/projected/9834af61-ac69-4329-923f-54f403290c0c-kube-api-access-z8ch2\") pod \"auto-csr-approver-29535522-8wbk9\" (UID: \"9834af61-ac69-4329-923f-54f403290c0c\") " pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.477798 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:00 crc kubenswrapper[4805]: I0226 18:42:00.934992 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535522-8wbk9"] Feb 26 18:42:01 crc kubenswrapper[4805]: I0226 18:42:01.601685 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" event={"ID":"9834af61-ac69-4329-923f-54f403290c0c","Type":"ContainerStarted","Data":"7909a16533aca4f29e6267949bb4a76f196fed617950cfef4abf831cc4c3504c"} Feb 26 18:42:02 crc kubenswrapper[4805]: I0226 18:42:02.619615 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" event={"ID":"9834af61-ac69-4329-923f-54f403290c0c","Type":"ContainerStarted","Data":"4ed040cd1ba50066602f0c7602ae1daae6444a39bab96d7d59d48b2e8c83cb7f"} Feb 26 18:42:02 crc kubenswrapper[4805]: I0226 18:42:02.637183 4805 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" podStartSLOduration=1.45420759 podStartE2EDuration="2.637159392s" podCreationTimestamp="2026-02-26 18:42:00 +0000 UTC" firstStartedPulling="2026-02-26 18:42:00.936643577 +0000 UTC m=+5235.498397916" lastFinishedPulling="2026-02-26 
18:42:02.119595389 +0000 UTC m=+5236.681349718" observedRunningTime="2026-02-26 18:42:02.631955712 +0000 UTC m=+5237.193710051" watchObservedRunningTime="2026-02-26 18:42:02.637159392 +0000 UTC m=+5237.198913741" Feb 26 18:42:03 crc kubenswrapper[4805]: I0226 18:42:03.645107 4805 generic.go:334] "Generic (PLEG): container finished" podID="9834af61-ac69-4329-923f-54f403290c0c" containerID="4ed040cd1ba50066602f0c7602ae1daae6444a39bab96d7d59d48b2e8c83cb7f" exitCode=0 Feb 26 18:42:03 crc kubenswrapper[4805]: I0226 18:42:03.645327 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" event={"ID":"9834af61-ac69-4329-923f-54f403290c0c","Type":"ContainerDied","Data":"4ed040cd1ba50066602f0c7602ae1daae6444a39bab96d7d59d48b2e8c83cb7f"} Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.180654 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.363400 4805 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ch2\" (UniqueName: \"kubernetes.io/projected/9834af61-ac69-4329-923f-54f403290c0c-kube-api-access-z8ch2\") pod \"9834af61-ac69-4329-923f-54f403290c0c\" (UID: \"9834af61-ac69-4329-923f-54f403290c0c\") " Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.370206 4805 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9834af61-ac69-4329-923f-54f403290c0c-kube-api-access-z8ch2" (OuterVolumeSpecName: "kube-api-access-z8ch2") pod "9834af61-ac69-4329-923f-54f403290c0c" (UID: "9834af61-ac69-4329-923f-54f403290c0c"). InnerVolumeSpecName "kube-api-access-z8ch2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.468511 4805 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ch2\" (UniqueName: \"kubernetes.io/projected/9834af61-ac69-4329-923f-54f403290c0c-kube-api-access-z8ch2\") on node \"crc\" DevicePath \"\"" Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.665600 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" event={"ID":"9834af61-ac69-4329-923f-54f403290c0c","Type":"ContainerDied","Data":"7909a16533aca4f29e6267949bb4a76f196fed617950cfef4abf831cc4c3504c"} Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.665651 4805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7909a16533aca4f29e6267949bb4a76f196fed617950cfef4abf831cc4c3504c" Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.665713 4805 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535522-8wbk9" Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.708601 4805 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29535516-8jgpr"] Feb 26 18:42:05 crc kubenswrapper[4805]: I0226 18:42:05.717402 4805 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29535516-8jgpr"] Feb 26 18:42:06 crc kubenswrapper[4805]: I0226 18:42:06.966732 4805 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f813caa-a8ab-49b0-891a-d499bc05a9a5" path="/var/lib/kubelet/pods/1f813caa-a8ab-49b0-891a-d499bc05a9a5/volumes" Feb 26 18:42:50 crc kubenswrapper[4805]: I0226 18:42:50.012283 4805 scope.go:117] "RemoveContainer" containerID="a261f788842547c329e1a509b15464eb97e814ec93bec5920352a4c14b058a00" Feb 26 18:43:32 crc kubenswrapper[4805]: I0226 18:43:32.977694 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:43:32 crc kubenswrapper[4805]: I0226 18:43:32.978219 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.152837 4805 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29535524-jl9sk"] Feb 26 18:44:00 crc kubenswrapper[4805]: E0226 18:44:00.154285 4805 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9834af61-ac69-4329-923f-54f403290c0c" containerName="oc" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.154306 4805 state_mem.go:107] "Deleted CPUSet assignment" podUID="9834af61-ac69-4329-923f-54f403290c0c" containerName="oc" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.154726 4805 memory_manager.go:354] "RemoveStaleState removing state" podUID="9834af61-ac69-4329-923f-54f403290c0c" containerName="oc" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.155958 4805 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29535524-jl9sk" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.158711 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.158711 4805 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-s8mr8" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.158851 4805 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.168592 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535524-jl9sk"] Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.285767 4805 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fddz\" (UniqueName: \"kubernetes.io/projected/d624b327-eec0-48ed-bef9-ed4150dd144b-kube-api-access-9fddz\") pod \"auto-csr-approver-29535524-jl9sk\" (UID: \"d624b327-eec0-48ed-bef9-ed4150dd144b\") " pod="openshift-infra/auto-csr-approver-29535524-jl9sk" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.388111 4805 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fddz\" (UniqueName: \"kubernetes.io/projected/d624b327-eec0-48ed-bef9-ed4150dd144b-kube-api-access-9fddz\") pod \"auto-csr-approver-29535524-jl9sk\" (UID: \"d624b327-eec0-48ed-bef9-ed4150dd144b\") " pod="openshift-infra/auto-csr-approver-29535524-jl9sk" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.416823 4805 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fddz\" (UniqueName: \"kubernetes.io/projected/d624b327-eec0-48ed-bef9-ed4150dd144b-kube-api-access-9fddz\") pod \"auto-csr-approver-29535524-jl9sk\" (UID: \"d624b327-eec0-48ed-bef9-ed4150dd144b\") " 
pod="openshift-infra/auto-csr-approver-29535524-jl9sk" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.482639 4805 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29535524-jl9sk" Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.927193 4805 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29535524-jl9sk"] Feb 26 18:44:00 crc kubenswrapper[4805]: I0226 18:44:00.989639 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535524-jl9sk" event={"ID":"d624b327-eec0-48ed-bef9-ed4150dd144b","Type":"ContainerStarted","Data":"06baf55bcbd752aaaf6f457bef531e7ee51b1979768575d60dcfd2de06007cf0"} Feb 26 18:44:02 crc kubenswrapper[4805]: I0226 18:44:02.978462 4805 patch_prober.go:28] interesting pod/machine-config-daemon-2mnb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 26 18:44:02 crc kubenswrapper[4805]: I0226 18:44:02.980189 4805 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2mnb9" podUID="25e83477-65d0-41be-8e55-fdacfc5871a8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 26 18:44:02 crc kubenswrapper[4805]: I0226 18:44:02.983632 4805 generic.go:334] "Generic (PLEG): container finished" podID="d624b327-eec0-48ed-bef9-ed4150dd144b" containerID="fc1758b5f8e98f28c41815af7f65ad605624d876d91955d4795cb8e27bb87977" exitCode=0 Feb 26 18:44:02 crc kubenswrapper[4805]: I0226 18:44:02.983684 4805 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29535524-jl9sk" 
event={"ID":"d624b327-eec0-48ed-bef9-ed4150dd144b","Type":"ContainerDied","Data":"fc1758b5f8e98f28c41815af7f65ad605624d876d91955d4795cb8e27bb87977"}